Planet Raku

Raku RSS Feeds

Roman Baumer (Freenode: rba #raku or ##raku-infra) / 2021-04-13T10:19:18


Rakudo Weekly News: 2021.14 Floating Rats

Published by liztormato on 2021-04-05T14:36:48

Steve Roe has published an interesting blog post about how the Raku Programming Language is holding up under the Muller Extreme Challenge, an algorithm designed to break computational accuracy: raku:34 python:19 extreme math. This in turn caused a discussion on /r/rakulang and in the end a Pull Request for a new feature!

TPaRCitC 2021 Call For Papers

The Perl and Raku Conference in the Cloud 2021 will happen on 8 June 2021. And the Call for Papers is open now! Be sure to submit your Raku presentations!

PostgreSQL Connectivity

Luca Ferrari looks at connecting to a PostgreSQL database from the Raku Programming Language in: A glance at Raku connectivity towards PostgreSQL.

Installing on Ubuntu

Damian A. has published a tutorial on how to install Rakudo on Ubuntu 20.04.

Weeklies

Weekly Challenge #107 is available for your perusal. And this week’s “What’s everyone working on (2021.14)” as well.

Pull Requests

Please check them out and leave any comments that you may have!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on the mailing list

Comments about Raku

Updated Raku Modules

Winding down

Some very cool core developments this week, in stability, ease of maintenance and number of new features! And quite a number of updated modules as well in this week’s Rakudo Weekly News! Stay healthy and safe, please, for another instalment next week!

p6steve: raku:34 python:19 extreme math

Published by p6steve on 2021-04-02T17:56:04

Coming off the excellent raku weekly news, my curiosity was piqued by a tweet about big-endian smells that referenced a blog about “extreme math”. After getting my fill of COBOL mainframe nostalgia, the example of Muller’s Recurrence got me thinking.

The simple claim made in the tweet thread was:

Near the end it [the blog] states that no modern language has fixed point, but Raku (formerly Perl6) has a built in rational type which is quite an interesting comparison. It keeps two integers for the numerator and the denominator and no loss of precision occurs.

I have also covered some of the benefits of the raku approach to math in a previous blog, Machine Math and Raku. Often the example given is 0.1 + 0.2 => 0.3, which trips up a lot of languages. I like this example, but I am not entirely convinced by it – sure it can be odd when a programming newbie sees a slightly different result caused by floating point conversions – but it is too mickey mouse to be a serious concern.
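The same comparison can be made in Python, whose standard fractions module also keeps a numerator/denominator pair, much as Raku's Rat does (a Python illustration of the idea, not code from the original post):

```python
from fractions import Fraction

# Binary floats cannot represent 0.1 or 0.2 exactly, so the sum drifts:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact rationals keep a numerator and a denominator, so no drift occurs:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```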

The Muller Extreme Challenge

This challenge starts with seemingly innocuous equations and quickly descends into very substantial errors. To quote from the Technical Archaeologist blog:

Jean-Michel Muller is a French computer scientist with perhaps the best computer science job in the world. He finds ways to break computers using math. I’m sure he would say he studies reliability and accuracy problems, but no no no: He designs math problems that break computers. One such problem is his recurrence formula. Which looks something like this:

x[n+1] = 108 - (815 - 1500/x[n-1]) / x[n]

That doesn’t look so scary does it? The recurrence problem is useful for our purposes because:

And here’s a quick python script that produces floating point and fixed point versions of Muller’s Recurrence side by side:

from decimal import Decimal

def rec(y, z):
    return 108 - ((815 - 1500/z) / y)

def floatpt(N):
    x = [4, 4.25]
    for i in range(2, N+1):
        x.append(rec(x[i-1], x[i-2]))
    return x

def fixedpt(N):
    x = [Decimal(4), Decimal(17)/Decimal(4)]
    for i in range(2, N+1):
        x.append(rec(x[i-1], x[i-2]))
    return x

N = 30
flt = floatpt(N)
fxd = fixedpt(N)

for i in range(N+1):
    print(str(i) + ' | ' + str(flt[i]) + ' | ' + str(fxd[i]))

Which gives us the following output:

i  | floating pt    | fixed pt
-- | -------------- | ---------------------------
0  | 4              | 4
1  | 4.25           | 4.25
2  | 4.47058823529  | 4.4705882352941176470588235
3  | 4.64473684211  | 4.6447368421052631578947362
4  | 4.77053824363  | 4.7705382436260623229461618
5  | 4.85570071257  | 4.8557007125890736342039857
6  | 4.91084749866  | 4.9108474990827932004342938
7  | 4.94553739553  | 4.9455374041239167246519529
8  | 4.96696240804  | 4.9669625817627005962571288
9  | 4.98004220429  | 4.9800457013556311118526582
10 | 4.9879092328   | 4.9879794484783912679439415
11 | 4.99136264131  | 4.9927702880620482067468253
12 | 4.96745509555  | 4.9956558915062356478184985
13 | 4.42969049831  | 4.9973912683733697540253088
14 | -7.81723657846 | 4.9984339437852482376781601
15 | 168.939167671  | 4.9990600687785413938424188
16 | 102.039963152  | 4.9994358732880376990501184
17 | 100.099947516  | 4.9996602467866575821700634
18 | 100.004992041  | 4.9997713526716167817979714
19 | 100.000249579  | 4.9993671517118171375788238
20 | 100.00001247862016 | 4.9897059157620938291040004
21 | 100.00000062392161 | 4.7951151851630947311130380
22 | 100.0000000311958  | 0.7281074924258006736651754
23 | 100.00000000155978 | -581.7081261405031229400219627
24 | 100.00000000007799 | 105.8595186892360167901632650
25 | 100.0000000000039  | 100.2767586430669099906187869
26 | 100.0000000000002  | 100.0137997241561168045699158
27 | 100.00000000000001 | 100.0006898905241097140861868
28 | 100.0 | 100.0000344942738135445216746
29 | 100.0 | 100.0000017247126631766583580
30 | 100.0 | 100.0000000862356186943169827

Up until about the 12th iteration the rounding error seems more or less negligible but things quickly go off the rails. Floating point math converges around a number twenty times the value of what the same calculation with fixed point math produces.

Lest you think it is unlikely that anyone would do a recursive calculation so many times over: this is exactly what happened in 1991, when the Patriot Missile control system miscalculated the time and killed 28 people. And it turns out floating point math has blown lots of stuff up completely by accident. Mark Stadtherr gave an incredible talk about this called High Performance Computing: are we just getting wrong answers faster? You should read it if you want more examples and a more detailed history of the issue than I can offer here.

[endquote]

So, basically, python Float dies at iteration #12 and python Fixed/Decimal dies at iteration #19. According to the source text COBOL dies at iteration #18. Then the argument focuses on the need for the Decimal library.
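As a side experiment (not from the source text), Python's standard fractions module provides exact rationals, numerator/denominator pairs analogous to Raku's Rat, and running the same recurrence with Fraction keeps converging toward 5 with no rounding at any step:

```python
from fractions import Fraction

def rec(y, z):
    return 108 - (815 - 1500/z) / y

# Seed with exact rationals; every subsequent value then stays exact,
# because int and Fraction arithmetic never leaves the rationals.
x = [Fraction(4), Fraction(17, 4)]
for i in range(2, 31):
    x.append(rec(x[i-1], x[i-2]))

print(float(x[30]))  # still just below 5, nowhere near 100
```

The price is that the numerators and denominators grow without bound, which is the trade-off Raku's Rat avoids by falling back to floating point once the denominator gets too large.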

How does raku Measure Up?

I do not buy the no loss of precision occurs claim made on twitter beyond the simpler examples, but I do think that Rats should fare well in the face of this kind of challenge. Here’s my code with raku default math:

my \N = 30;
my \x = []; 
x[0] = 4; 
x[1] = 4.25;

sub f(\y,\z) { 
    108 - ( (815 - 1500/z ) / y ) }

for 2..N -> \i { 
    x[i] = f(x[i-1],x[i-2])   }

for 0..N -> \i {
    say( i ~ ' | ' ~ x[i] )   }

My quick impression is that the raku is a little more faithful to the mathematical description and a little less cramped than the python.

The raku output gives:

0 | 4
1 | 4.25
2 | 4.470588
3 | 4.644737
4 | 4.770538
5 | 4.855701
6 | 4.910847
7 | 4.945537
8 | 4.9669626
9 | 4.9800457
10 | 4.98797945
11 | 4.992770288
12 | 4.9956558915
13 | 4.9973912684
14 | 4.99843394394
15 | 4.999060071971
16 | 4.999435937147
17 | 4.9996615241038
18 | 4.99979690071342
19 | 4.99987813547793
20 | 4.9999268795046
21 | 4.9999561270611577
22 | 4.99997367600571244
23 | 4.99998420552027271
24 | 4.999990523282227659
25 | 4.9999943139585595936
26 | 4.9999965883712560237
27 | 4.99999795302135690799
28 | 4.999998771812315
29 | 4.99999926308729
30 | 4.999999557853926

So, 30 iterations with no loss of precision – and with the native raku math defaults. Nice!

Eventually raku breaks at 34 iterations, so raku:34, python:19.

~p6steve

PS. And to reflect the harsh reality of life, Victor Eijkhout’s comment can have the final word: so know your own limits!

This is not a problem of fixed point vs floating point. I think your examples favor Fix because you give it more digits of accuracy. What would happen if you used a Float format where the mantissa is equally long as the total Fix length? Objection #2: I think Cobol / Fix would converge away from 5 if you ran more iterations. The Muller equation has three fixed points: x_n==3, x_n==5, and x_n==100. If you start close enough to 5 it will converge there for a while, but (I’m guessing here; didn’t run all the tests) it will converge to the 100 solution. Since you give the float solution less precision it simply converges there faster. The only real lesson here is not to code unstable recursions.
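The three fixed points the comment mentions are easy to verify: plugging x_n = x_{n-1} = p into the recurrence returns p for each of 3, 5 and 100. A quick check with exact rationals (my own side note, not part of the original comment):

```python
from fractions import Fraction

def rec(y, z):
    return 108 - (815 - 1500/z) / y

# p = 3: 108 - (815 - 500)/3 = 108 - 105 = 3, and similarly for 5 and 100.
for p in (Fraction(3), Fraction(5), Fraction(100)):
    assert rec(p, p) == p
print("3, 5 and 100 are all fixed points")
```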

Pawel bbkr Pabian: Asynchronous, parallel and... dead. My Perl 6 daily bread.

Published by Pawel bbkr Pabian on 2015-09-06T14:00:56

I love Perl 6 asynchronous features. They are so easy to use and can give an instant boost by changing a few lines of code, so I got addicted to them. I became an asynchronous junkie. And finally overdosed. Here is my story...

I was processing a document that was divided into chapters, sub-chapters, sub-sub-chapters and so on. Parsed to data structure it looked like this:

    my %document = (
        '1' => {
            '1.1' => 'Lorem ipsum',
            '1.2' => {
                '1.2.1' => 'Lorem ipsum',
                '1.2.2' => 'Lorem ipsum'
            }
        },
        '2' => {
            '2.1' => {
                '2.1.1' => 'Lorem ipsum'
            }
        }
    );

Every chapter required processing of its children before it could be processed itself. Also, processing of each chapter was quite time consuming - no matter which level it was at or how many children it had. So I started by writing a recursive function to do it:

    sub process (%chapters) {
        for %chapters.kv -> $number, $content {
            note "Chapter $number started";
            &?ROUTINE.($content) if $content ~~ Hash;
            sleep 1; # here the chapter itself is processed
            note "Chapter $number finished";
        }
    }
    
    process(%document);

So nothing fancy here. Maybe except the current &?ROUTINE variable, which makes recursive code less error prone - there is no need to repeat the subroutine name explicitly. After running it I got the expected DFS (Depth First Search) flow:

    $ time perl6 run.pl
    Chapter 1 started
    Chapter 1.1 started
    Chapter 1.1 finished
    Chapter 1.2 started
    Chapter 1.2.1 started
    Chapter 1.2.1 finished
    Chapter 1.2.2 started
    Chapter 1.2.2 finished
    Chapter 1.2 finished
    Chapter 1 finished
    Chapter 2 started
    Chapter 2.1 started
    Chapter 2.1.1 started
    Chapter 2.1.1 finished
    Chapter 2.1 finished
    Chapter 2 finished
    
    real    0m8.184s

It worked perfectly, but that was too slow. Because 1 second was required to process each chapter in a serial manner, it ran for 8 seconds in total. So without hesitation I reached for Perl 6 asynchronous goodies to process chapters in parallel.

    sub process (%chapters) {
        await do for %chapters.kv -> $number, $content {
            start {
                note "Chapter $number started";
                &?ROUTINE.outer.($content) if $content ~~ Hash;
                sleep 1; # here the chapter itself is processed
                note "Chapter $number finished";
            }
        }
    }
    
    process(%document);

Now every chapter is processed asynchronously in parallel, after first waiting for its children to be processed asynchronously in parallel as well. Note that after wrapping the processing in an await/start construct, &?ROUTINE must now point to the outer scope.

    $ time perl6 run.pl
    Chapter 1 started
    Chapter 2 started
    Chapter 1.1 started
    Chapter 1.2 started
    Chapter 2.1 started
    Chapter 1.2.1 started
    Chapter 2.1.1 started
    Chapter 1.2.2 started
    Chapter 1.1 finished
    Chapter 1.2.1 finished
    Chapter 1.2.2 finished
    Chapter 2.1.1 finished
    Chapter 2.1 finished
    Chapter 1.2 finished
    Chapter 1 finished
    Chapter 2 finished
    
    real    0m3.171s

Perfect. Time dropped to the expected 3 seconds - it was not possible to go any faster, because the document had 3 nesting levels and each required 1 second to process. Still smiling, I threw a bigger document at my beautiful script - 10 chapters, each with 10 sub-chapters, each with 10 sub-sub-chapters. It started processing, ran for a while... and DEADLOCKED.

Friedrich Nietzsche said that "when you gaze long into an abyss the abyss also gazes into you". The same rule applies to code. After a few minutes my code and I were staring at each other. And I couldn't find why it worked perfectly for small documents but was deadlocking at random moments for big ones. Half an hour later I blinked and was defeated by my own code in the staring contest. So it was time for debugging.

I noticed that when it was deadlocking there were always exactly 16 chapters still in progress. And that number looked familiar to me - the thread pool!

    $ perl6 -e 'say start { }'
    Promise.new(
        scheduler => ThreadPoolScheduler.new(
            initial_threads => 0,
            max_threads => 16,
            uncaught_handler => Callable
        ),
        status => PromiseStatus::Kept
    )

Every asynchronous task that is planned needs a free thread so it can be executed. And on my system only 16 concurrent threads are allowed, as shown above. To analyze what happened, let's use the document from the first example but also assume the thread pool is limited to 4:

    $ perl6 run.pl          # 4 threads available by default
    Chapter 1 started       # 3 threads available
    Chapter 1.1 started     # 2 threads available
    Chapter 2 started       # 1 thread available
    Chapter 1.1 finished    # 2 threads available again
    Chapter 1.2 started     # 1 thread available
    Chapter 1.2.1 started   # 0 threads available
                            # deadlock!

At this moment the chapter 1 subtree holds three threads and waits for one more, for chapter 1.2.2, to complete everything and start ascending from the recursion. And the subtree of chapter 2 holds one thread and waits for one more, for chapter 2.1, to descend into the recursion. As a result, processing reaches a point where at least one more thread is required to proceed, but all threads are taken and none can be returned to the thread pool. The script deadlocks and stops there forever.

How to solve this problem and maintain parallel processing? There are many ways to do it :)
The key to the solution is to process asynchronously only those chapters that do not have unprocessed chapters on lower level.

Luckily Perl 6 offers the perfect tool - promise junctions. It is possible to create a promise that waits for other promises to be kept, and until that happens it is not sent to the thread pool for execution. The following code illustrates that:

    my $p = Promise.allof( Promise.in(2), Promise.in(3) );
    sleep 1;
    say "Promise after 1 second: " ~ $p.perl;
    sleep 3;
    say "Promise after 4 seconds: " ~ $p.perl;

Prints:

    Promise after 1 second: Promise.new(
        ..., status => PromiseStatus::Planned
    )
    Promise after 4 seconds: Promise.new(
        ..., status => PromiseStatus::Kept
    )

Let's rewrite processing using this cool property:

    sub process (%chapters) {
        return Promise.allof(
            do for %chapters.kv -> $number, $content {
                my $current = {
                    note "Chapter $number started";
                    sleep 1; # here the chapter itself is processed
                    note "Chapter $number finished";
                };
                
                if $content ~~ Hash {
                    Promise.allof( &?ROUTINE.($content) )
                        .then( $current );
                }
                else {
                    Promise.start( $current );
                }
            }
        );
    }
    
    await process(%document);

It solves the problem where a chapter was competing with its sub-chapters for free threads while at the same time needing those sub-chapters to be processed before it could process itself. Now awaiting the sub-chapters' completion does not require a free thread. Let's run it:

    $ perl6 run.pl
    Chapter 1.1 started
    Chapter 1.2.1 started
    Chapter 1.2.2 started
    Chapter 2.1.1 started
    -
    Chapter 1.1 finished
    Chapter 1.2.1 finished
    Chapter 1.2.2 finished
    Chapter 1.2 started
    Chapter 2.1.1 finished
    Chapter 2.1 started
    -
    Chapter 1.2 finished
    Chapter 1 started
    Chapter 2.1 finished
    Chapter 2 started
    -
    Chapter 1 finished
    Chapter 2 finished
    
    real    0m3.454s

I've added a separator for each second passed so it is easier to follow. When the script starts, chapters 1.1, 1.2.1, 1.2.2 and 2.1.1 do not have sub-chapters at all, so they can take threads from the thread pool immediately. When they are completed after one second, the Promises that were awaiting all of them are kept, and chapters 1.2 and 2.1 can be processed safely on the thread pool. This keeps going until the recursion unwinds.
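The same "children before parents, and never occupy a thread just to wait" principle can be sketched in Python with a bounded thread pool. This is my own, coarser analog of the fix (level-by-level scheduling rather than per-subtree promise chaining; the document and names are hypothetical), not a translation of the Perl 6 code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

document = {
    '1': {'1.1': 'Lorem ipsum',
          '1.2': {'1.2.1': 'Lorem ipsum', '1.2.2': 'Lorem ipsum'}},
    '2': {'2.1': {'2.1.1': 'Lorem ipsum'}},
}

def by_depth(chapters, depth=0, acc=None):
    # Group chapter numbers by nesting depth.
    acc = {} if acc is None else acc
    for number, content in chapters.items():
        acc.setdefault(depth, []).append(number)
        if isinstance(content, dict):
            by_depth(content, depth + 1, acc)
    return acc

def process(number):
    time.sleep(0.1)  # stand-in for the real per-chapter work
    return number

levels = by_depth(document)
order = []
with ThreadPoolExecutor(max_workers=4) as pool:
    # Deepest chapters first: a chapter is only ever submitted once the
    # whole level below it has finished, and the waiting happens in the
    # main thread, so no pool thread is ever blocked on children.
    for depth in sorted(levels, reverse=True):
        futures = [pool.submit(process, n) for n in levels[depth]]
        order += [f.result() for f in futures]

print(order)
```

This wastes some parallelism compared to the promise-junction version (chapter 2.1 need not wait for chapter 1.2.2), but it makes the deadlock impossible for the same underlying reason.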

After trying the big document again, it was processed flawlessly in 72 seconds instead of the linear 1000.

I'm high on asynchronous processing again!

You can download script here and try different data sizes and algorithms for yourself (params are taken from command line).

Rakudo Weekly News: 2021.13 Games Pop

Published by liztormato on 2021-03-29T21:16:38

JJ Atria just announced another part of the Raku Programming Language’s growing support for games: POP, an experimental 2D game development framework. It has been inspired by frameworks like LÖVE2D and Pico-8, and by others of a similar nature. Meanwhile, it turns out that this work is complementary to Geoffrey Broadwell‘s MUGS (Multi-User Gaming Services) project. Which made it the right time to start a new IRC channel dedicated to game development using the Raku Programming Language: #raku-gamedev on Freenode.

First Raku-CI Bot Grant Report

Patrick Böker has published their first grant report for the Raku Continuous Integration Bot Grant: it gives an introduction to Patrick, the problem space, the plan and the next steps to be taken. It’s cool to see the project taking off (/r/rakulang comments)!

Season of Docs

JJ Merelo has initiated a formal request to improve the documentation of the Raku Programming Language for the Google Season of Docs (/r/rakulang comments).

Weeklies

Weekly Challenge #106 is available for your perusal. And this week’s “What’s everyone working on (2021.13)” as well.

Pull Requests

Please check them out and leave any comments that you may have!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on the mailing list

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

More gaming! Again, it’s good to see Raku being used for fun and profit. Please check in again next week for another instalment of the Rakudo Weekly News! Healthy and safe, please!

Rakudo Weekly News: 2021.12 Games Begin

Published by liztormato on 2021-03-22T14:44:39

Longtime Raku developer Geoffrey Broadwell has released a platform for game service development called MUGS (Multi-User Gaming Services). It is intended for creating client-server and multi-user games by abstracting away the boilerplate of managing player identities, tracking active games and sessions, sending and receiving messages and actions, and so forth.

It is based on Cro, a set of Raku libraries for building reactive distributed systems. And already at release 0.1.0! Great to see yet another cool development in the Raku Programming Language! (/r/rakulang comments).

Rakudo Compiler 2021.03 released

Alexander Kiryuhin has done it again. Another Rakudo Compiler Release on the dot, like clockwork: Rakudo 2021.03, with some nice additions, fixes and performance improvements. Binary packages are already available: John Longwalker even blogged about them in “Overdue Appreciation for rakudo-pkg“.

Comma in (p)review

Jonathan Worthington looked back on a year of Comma (the Raku Programming Language IDE) and what can be expected of Comma in 2021 in Comma in 2020: a year in review. With some cool animations and examples of amazing Comma features (/r/rakulang comments).

February Grant Report

Jonathan Worthington also reported on their work in February 2021 on the new dispatch mechanism for the Raku Development Grant (/r/rakulang comments).

Covid Observer

Thanks to the inattentiveness of yours truly, the first year anniversary of Andrew Shitov‘s Covid Observer was missed in last week’s Rakudo Weekly News. My apologies! Please use the “weekly: message” feature on the #raku IRC channel to make sure your announcements won’t be missed.

Alternate Raku website

CIAvash has made their proposal for a new Raku website public, which also contains a Persian language version (/r/rakulang comments).

Improved Emacs support

CIAvash also blogged about their improved flycheck-raku for melpa (/r/rakulang comments).

Weeklies

Weekly Challenge #105 is available for your perusal. And this week’s “What’s everyone working on (2021.12)” as well.

Pull Requests

Please check them out and leave any comments that you may have!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on the mailing list

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

Again, an action packed week at the Rakudo Weekly News! And since gaming is serious business, it’s good to see Raku being used for fun and profit. Please continue to stay safe and stay healthy! If not for yourself and your loved ones, then for another instalment of the Rakudo Weekly News next week!

Rakudo Weekly News: 2021.11 Two Year Itch

Published by liztormato on 2021-03-15T19:10:11

This week saw the return of not one, but two long time contributors with an extended blog post after a 2 year hiatus: one of them technical, the other more philosophical. It’s great to see these types of blog posts happen again! Yours truly is happy that it didn’t turn out to be a Seven Year Itch!

Towards a new general dispatch mechanism

Jonathan Worthington returns with a large technical blog post about the problem of implementing all of the various dispatch capabilities of the Raku Programming Language in a more efficient way. The article goes into what dispatch actually means, and why it matters to make it as fast and flexible as possible, and its relation to the RakuAST work. The good news in this, is that most of the discussed features are actually already implemented. In short:

The work that remains is enumerable, and the day we ship a Rakudo and MoarVM release using the new dispatcher feels a small number of months away.

No pressure! (/r/rakulang comments)

Why bother with scripting?

Bart Wiegmans returns with a large, more philosophical blog post about the contrast between scripting and programming, between scripts and applications. And how this affects the preferred features of a programming language. About Raku:

a language can be introduced for scripts and some of those scripts are then extended into applications (or even systems), thereby ensuring its continued usage

Thought provoking! (/r/rakulang comments)

Get Started with Terminal-UI

Brian Duggan has published a nice introduction into their Terminal::UI module, which shows the remarkable simplicity with which one can create fully interactive terminal applications with the Raku Programming Language (/r/rakulang, /r/commandline comments).

A match for *

Wenzel P.P. Peppmeyer elaborates on a question asked on the #raku IRC channel (by new Raku user PimDaniel) on how to turn a Match object into a lazy list: Raku is a match for * (/r/rakulang comments).

A Raku DevRoom at esLibre

JJ Merelo has been able to secure a Raku Devroom at the esLibre online event on 25/26 June 2021, which one could argue is the Spanish version of FOSDEM. The theme of the devroom is “A language for the 21st century“. Even though the most used natural language is Spanish, presentations in English will also be considered. So please, warm up your presentation ideas and submit them to JJ Merelo!

Raku Magic

Julia, a pretty recent Raku Programming Language aficionado, launched the idea of tweeting small pieces of Raku code for fun, relaxation and potentially profit! Just add your snippet, wait a little while, and it should then appear on Twitter! Yours truly could not resist!

Module documentation

About two weeks ago, Richard Hainsworth started a discussion about the documentation of Raku ecosystem modules, which was rekindled by Luis F. Uceta on /r/rakulang. Please leave any thoughts that you may have on the subject, either on the mailing list or on the /r/rakulang subreddit.

Raku documentation

Meanwhile, Alexander Kiryuhin has been working on a new version of the Raku Programming Language documentation web site (temporary URL). Although a lot of work has been done already, Alexander is specifically inviting your participation in the areas of JavaScript, to create a more immersive experience when looking at the documentation. Please check out the current issues that need solving before this could go live (/r/rakulang comments).

What’s everyone working on?

Daniel Sockwell stole an idea from other programming language communities: a free-form update on what people have worked on in the past week, with a focus on the Raku Programming Language of course. The first issue can be found on /r/rakulang, hopefully there will be many more to come!

Which scripting language should I continue to learn?

ItchyPlant asked that very question with regards to Raku vs Javascript vs PHP vs Python on /r/programming. With quite a few nice comments and suggestions, also in /r/rakulang.

7 Years of Raku performance improvements

Daniel Sockwell had their attention drawn to the benchmarks that H. Merijn Brand has been running for the past 7 years now (a subset of Text::CSV functionality).

RSC Meeting Minutes

Now that the Raku Steering Council has settled on more organized, bi-weekly meetings, it seemed like a good idea to actually write minutes of these meetings and publish them. These are the minutes of the RSC meeting on 6 March 2021.

Weeklies

Weekly Challenge #104 is available for your perusal.

Pull Requests

Please check them out and leave any comments that you may have!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on the mailing list

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

Wow, what a Weekly! Good to see so many new users and new efforts to bring the joy of the Raku Programming Language to more people in the world! Hope you liked it. Meanwhile, stay safe and stay healthy for another instalment of the Rakudo Weekly News next week!

6guts: Towards a new general dispatch mechanism in MoarVM

Published by jnthnwrthngtn on 2021-03-15T02:08:42

My goodness, it appears I’m writing my first Raku internals blog post in over two years. Of course, two years ago it wasn’t even called Raku. Anyway, without further ado, let’s get on with this shared brainache.

What is dispatch?

I use “dispatch” to mean a process by which we take a set of arguments and end up with some action being taken based upon them. Some familiar examples include:

At first glance, perhaps the first two seem fairly easy and the third a bit more of a handful – which is sort of true. However, Raku has a number of other features that make dispatch rather more, well, interesting. For example:

Thanks to this, dispatch – at least in Raku – is not always something we do and produce an outcome, but rather a process that we may be asked to continue with multiple times!

Finally, while the examples I’ve written above can all quite clearly be seen as examples of dispatch, a number of other common constructs in Raku can be expressed as a kind of dispatch too. Assignment is one example: the semantics of it depend on the target of the assignment and the value being assigned, and thus we need to pick the correct semantics. Coercion is another example, and return value type-checking yet another.

Why does dispatch matter?

Dispatch is everywhere in our programs, quietly tying together the code that wants stuff done with the code that does stuff. Its ubiquity means it plays a significant role in program performance. In the best case, we can reduce the cost to zero. In the worst case, the cost of the dispatch is high enough to exceed that of the work done as a result of the dispatch.

To a first approximation, when the runtime “understands” the dispatch the performance tends to be at least somewhat decent, but when it doesn’t there’s a high chance of it being awful. Dispatches tend to involve an amount of work that can be cached, often with some cheap guards to verify the validity of the cached outcome. For example, in a method dispatch, naively we need to walk a linearization of the inheritance graph and ask each class we encounter along the way if it has a method of the specified name. Clearly, this is not going to be terribly fast if we do it on every method call. However, a particular method name on a particular type (identified precisely, without regard to subclassing) will resolve to the same method each time. Thus, we can cache the outcome of the lookup, and use it whenever the type of the invocant matches that used to produce the cached result.
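The cache-plus-guard idea can be sketched as a toy monomorphic inline cache. This is a Python illustration of the concept described above, not MoarVM code: the slow path walks the linearized inheritance graph (Python's MRO) once, and a cheap exact-type guard then reuses the cached method on subsequent calls.

```python
def make_cached_dispatch(name):
    cached_type = cached_method = None

    def call(obj, *args):
        nonlocal cached_type, cached_method
        if type(obj) is cached_type:          # guard: exact type match
            return cached_method(obj, *args)  # hit: no lookup needed
        for klass in type(obj).__mro__:       # miss: walk the linearization
            if name in klass.__dict__:
                cached_type = type(obj)       # remember invocant type...
                cached_method = klass.__dict__[name]  # ...and its resolution
                return cached_method(obj, *args)
        raise AttributeError(name)

    return call

class Animal:
    def speak(self):
        return "..."

class Dog(Animal):
    pass

dispatch = make_cached_dispatch("speak")
print(dispatch(Dog()))  # first call resolves via Animal and fills the cache
print(dispatch(Dog()))  # second call hits the guard and skips the MRO walk
```

A real implementation also has to invalidate the cache if the class hierarchy changes, which is part of what makes dispatch machinery genuinely hard.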

Specialized vs. generalized mechanisms in language runtimes

When one starts building a runtime aimed at a particular language, and has to do it on a pretty tight budget, the most obvious way to get somewhat tolerable performance is to bake various hot-path language semantics into the runtime. This is exactly how MoarVM started out. Thus, if we look at MoarVM as it stood several years ago, we find things like:

These are all still there today, however are also all on the way out. What’s most telling about this list is what isn’t included. Things like:

A few years back I started to partially address this, with the introduction of a mechanism I called “specializer plugins”. But first, what is the specializer?

When MoarVM started out, it was a relatively straightforward interpreter of bytecode. It only had to be fast enough to beat the Parrot VM in order to get a decent amount of usage, which I saw as important to have before going on to implement some more interesting optimizations (back then we didn’t have the kind of pre-release automated testing infrastructure we have today, and so depended much more on feedback from early adopters). Anyway, soon after being able to run pretty much as much of the Raku language as any other backend, I started on the dynamic optimizer. It gathered type statistics as the program was interpreted, identified hot code, put it into SSA form, used the type statistics to insert guards, used those together with static properties of the bytecode to analyze and optimize, and produced specialized bytecode for the function in question. This bytecode could elide type checks and various lookups, as well as using a range of internal ops that make all kinds of assumptions, which were safe because of the program properties that were proved by the optimizer. This is called specialized bytecode because it has had a lot of its genericity – which would allow it to work correctly on all types of value that we might encounter – removed, in favor of working in a particular special case that actually occurs at runtime. (Code, especially in more dynamic languages, is generally far more generic in theory than it ever turns out to be in practice.)

This component – the specializer, known internally as “spesh” – delivered a significant further improvement in the performance of Raku programs, and with time its sophistication has grown, taking in optimizations such as inlining and escape analysis with scalar replacement. These aren’t easy things to build – but once a runtime has them, they create design possibilities that didn’t previously exist, and make decisions made in their absence look sub-optimal.

Of note, those special-cased language-specific mechanisms, baked into the runtime to get some speed in the early days, instead become something of a liability and a bottleneck. They have complex semantics, which means they are either opaque to the optimizer (so it can’t reason about them, meaning optimization is inhibited) or they need special casing in the optimizer (a liability).

So, back to specializer plugins. I reached a point where I wanted to take on the performance of things like $obj.?meth (the “call me maybe” dispatch), $obj.SomeType::meth() (dispatch qualified with a class to start looking in), and private method calls in roles (which can’t be resolved statically). At the same time, I was getting ready to implement some amount of escape analysis, but realized that it was going to be of very limited utility because assignment had also been special-cased in the VM, with a chunk of opaque C code doing the hot path stuff.

But why did we have the C code doing that hot-path stuff? Well, because it’d be too expensive to have every assignment call a VM-level function that does a bunch of checks and logic. Why is that costly? Because of function call overhead and the costs of interpretation. This was all true once upon a time. But some years of development later:

I solved the assignment problem and the dispatch problems mentioned above with the introduction of a single new mechanism: specializer plugins. They work as follows:

The vast majority of cases are monomorphic, meaning that only one set of guards are produced and they always succeed thereafter. The specializer can thus compile those guards into the specialized bytecode and then assume the given target invocant is what will be invoked. (Further, duplicate guards can be eliminated, so the guards a particular plugin introduces may reduce to zero.)
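The monomorphic fast path described above is essentially an inline cache. As a toy illustration (plain Python, not MoarVM internals; the class and method names here are invented for the example), a call site can remember the type it last saw together with the resolved target, and skip the lookup entirely while that guard keeps succeeding:

```python
# Toy sketch of a monomorphic inline cache (not MoarVM code).
# A call site records the type it saw (the "guard") and the resolved
# target; while the guard succeeds, dispatch skips the method lookup.

class CallSite:
    def __init__(self):
        self.guard_type = None   # type seen when the cache was recorded
        self.target = None       # resolved method for that type

    def dispatch(self, obj, name, *args):
        if type(obj) is self.guard_type:       # guard check: cheap
            return self.target(obj, *args)     # cached fast path
        # Guard failed (or cold site): do the full lookup and re-record.
        self.guard_type = type(obj)
        self.target = getattr(type(obj), name)
        return self.target(obj, *args)

class Dog:
    def speak(self):
        return "woof"

site = CallSite()
d = Dog()
site.dispatch(d, "speak")         # cold: records guard and target
print(site.dispatch(d, "speak"))  # warm: guard succeeds, no lookup
```

A real specializer goes further, compiling the guard into the specialized bytecode and removing guards that duplicate ones already established, but the shape of the win is the same: a cheap check in place of a full lookup.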

Specializer plugins felt pretty great. One new mechanism solved multiple optimization headaches.

The new MoarVM dispatch mechanism is the answer to a fairly simple question: what if we get rid of all the dispatch-related special-case mechanisms in favor of something a bit like specializer plugins? The resulting mechanism would need to be more powerful than specializer plugins. Further, I could learn from some of the shortcomings of specializer plugins. Thus, while they will go away after a relatively short lifetime, I think it’s fair to say that I would not have been in a place to design the new MoarVM dispatch mechanism without that experience.

The dispatch op and the bootstrap dispatchers

All the method caching. All the multi dispatch caching. All the specializer plugins. All the invocation protocol stuff for unwrapping the bytecode handle in a code object. It’s all going away, in favor of a single new dispatch instruction. Its name is, boringly enough, dispatch. It looks like this:

dispatch_o result, 'dispatcher-name', callsite, arg0, arg1, ..., argN

Which means:

(Aside: this implies a new calling convention, whereby we no longer copy the arguments into an argument buffer, but instead pass the base of the register set and a pointer into the bytecode where the register argument map is found, and then do a lookup registers[map[argument_index]] to get the value for an argument. That alone is a saving when we interpret, because we no longer need a loop around the interpreter per argument.)

Some of the arguments might be things we’d traditionally call arguments. Some are aimed at the dispatch process itself. It doesn’t really matter – but it is more optimal if we arrange to put arguments that are only for the dispatch first (for example, the method name), and those for the target of the dispatch afterwards (for example, the method parameters).

The new bootstrap mechanism provides a small number of built-in dispatchers, whose names start with “boot-”. They are:

That’s pretty much it. Every dispatcher we build, to teach the runtime about some other kind of dispatch behavior, eventually terminates in one of these.

Building on the bootstrap

Teaching MoarVM about different kinds of dispatch is done using nothing less than the dispatch mechanism itself! For the most part, boot-syscall is used in order to register a dispatcher, set up the guards, and provide the result that goes with them.

Here is a minimal example, taken from the dispatcher test suite, showing how a dispatcher that provides the identity function would look:

nqp::dispatch('boot-syscall', 'dispatcher-register', 'identity', -> $capture {
    nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-value', $capture);
});
sub identity($x) {
    nqp::dispatch('identity', $x)
}
ok(identity(42) == 42, 'Can define identity dispatch (1)');
ok(identity('foo') eq 'foo', 'Can define identity dispatch (2)');

In the first statement, we call the dispatcher-register MoarVM system call, passing a name for the dispatcher along with a closure, which will be called each time we need to handle the dispatch (which I tend to refer to as the “dispatch callback”). It receives a single argument, which is a capture of arguments (not actually a Raku-level Capture, but the idea – an object containing a set of call arguments – is the same).

Every user-defined dispatcher should eventually use dispatcher-delegate in order to identify another dispatcher to pass control along to. In this case, it delegates immediately to boot-value – meaning it really is nothing except a wrapper around the boot-value built-in dispatcher.

The sub identity contains a single static occurrence of the dispatch op. Given we call the sub twice, we will encounter this op twice at runtime, but the two times are very different.

The first time is the “record” phase. The arguments are formed into a capture and the callback runs, which in turn passes it along to the boot-value dispatcher, which produces the result. This results in an extremely simple dispatch program, which says that the result should be the first argument in the capture. Since there are no guards, this will always be a valid result.

The second time we encounter the dispatch op, it already has a dispatch program recorded there, so we are in run mode. Turning on a debugging mode in the MoarVM source, we can see the dispatch program that results looks like this:

Dispatch program (1 temporaries)
  Ops:
    Load argument 0 into temporary 0
    Set result object value from temporary 0

That is, it reads argument 0 into a temporary location and then sets that as the result of the dispatch. Notice how there is no mention of the fact that we went through an extra layer of dispatch; those have zero cost in the resulting dispatch program.

Capture manipulation

Argument captures are immutable. Various VM syscalls exist to transform them into new argument captures with some tweak, for example dropping or inserting arguments. Here’s a further example from the test suite:

nqp::dispatch('boot-syscall', 'dispatcher-register', 'drop-first', -> $capture {
    my $capture-derived := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $capture, 0);
    nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-value', $capture-derived);
});
ok(nqp::dispatch('drop-first', 'first', 'second') eq 'second',
    'dispatcher-drop-arg works');

This drops the first argument before passing the capture on to the boot-value dispatcher – meaning that it will return the second argument. Glance back at the previous dispatch program for the identity function. Can you guess how this one will look?

Well, here it is:

Dispatch program (1 temporaries)
  Ops:
    Load argument 1 into temporary 0
    Set result string value from temporary 0

Again, while in the record phase of such a dispatcher we really do create capture objects and make a dispatcher delegation, the resulting dispatch program is far simpler.

Here’s a slightly more involved example:

my $target := -> $x { $x + 1 }
nqp::dispatch('boot-syscall', 'dispatcher-register', 'call-on-target', -> $capture {
    my $capture-derived := nqp::dispatch('boot-syscall',
            'dispatcher-insert-arg-literal-obj', $capture, 0, $target);
    nqp::dispatch('boot-syscall', 'dispatcher-delegate',
            'boot-code-constant', $capture-derived);
});
sub cot() { nqp::dispatch('call-on-target', 49) }
ok(cot() == 50,
    'dispatcher-insert-arg-literal-obj works at start of capture');
ok(cot() == 50,
    'dispatcher-insert-arg-literal-obj works at start of capture after link too');

Here, we have a closure stored in a variable $target. We insert it as the first argument of the capture, and then delegate to boot-code-constant, which will invoke that code object and pass the other dispatch arguments to it. Once again, at the record phase, we really do something like:

And the resulting dispatch program? It’s this:

Dispatch program (1 temporaries)
  Ops:
    Load collectable constant at index 0 into temporary 0
    Skip first 0 args of incoming capture; callsite from 0
    Invoke MVMCode in temporary 0

That is, load the constant bytecode handle that we’re going to invoke, set up the args (which are in this case equal to those of the incoming capture), and then invoke the bytecode with those arguments. The argument shuffling is, once again, gone. In general, whenever the arguments we do an eventual bytecode invocation with are a tail of the initial dispatch arguments, the argument transform becomes no more than a pointer addition.
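To make the “pointer addition” point concrete, here is a toy sketch (plain Python, not MoarVM code): when the callee’s arguments are a tail of the dispatch arguments, the callee’s argument list is just an offset into the same buffer, with no copying. Python’s memoryview makes the zero-copy nature visible:

```python
# Toy sketch: the callee's arguments as a tail of the dispatch arguments.
# Slicing a memoryview does not copy; it is effectively the same buffer
# viewed from a different starting offset, i.e. a pointer addition.
import array

dispatch_args = array.array('q', [101, 7, 42])  # dispatch-only arg, then real args
view = memoryview(dispatch_args)
callee_args = view[1:]          # skip the dispatch-only argument: no copy made
print(callee_args[0], callee_args[1])
```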

Guards

All of the dispatch programs seen so far have been unconditional: once recorded at a given callsite, they shall always be used. The big missing piece to make such a mechanism have practical utility is guards. Guards assert properties such as the type of an argument or if the argument is definite (Int:D) or not (Int:U).

Here’s a somewhat longer test case, with some explanations placed throughout it.

# A couple of classes for test purposes
my class C1 { }
my class C2 { }

# A counter used to make sure we're only invoking the dispatch callback as
# many times as we expect.
my $count := 0;

# A type-name dispatcher that maps a type into a constant string value that
# is its name. This isn't terribly useful, but it is a decent small example.
nqp::dispatch('boot-syscall', 'dispatcher-register', 'type-name', -> $capture {
    # Bump the counter, just for testing purposes.
    $count++;

    # Obtain the value of the argument from the capture (using an existing
    # MoarVM op, though in the future this may go away in place of a syscall)
    # and then obtain the string typename also.
    my $arg-val := nqp::captureposarg($capture, 0);
    my str $name := $arg-val.HOW.name($arg-val);

    # This outcome is only going to be valid for a particular type. We track
    # the argument (which gives us an object back that we can use to guard
    # it) and then add the type guard.
    my $arg := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 0);
    nqp::dispatch('boot-syscall', 'dispatcher-guard-type', $arg);

    # Finally, insert the type name at the start of the capture and then
    # delegate to the boot-constant dispatcher.
    nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-constant',
        nqp::dispatch('boot-syscall', 'dispatcher-insert-arg-literal-str',
            $capture, 0, $name));
});

# A use of the dispatch for the tests. Put into a sub so there's a single
# static dispatch op, which all dispatch programs will hang off.
sub type-name($obj) {
    nqp::dispatch('type-name', $obj)
}

# Check with the first type, making sure the guard matches when it should
# (although this test would pass if the guard were ignored too).
ok(type-name(C1) eq 'C1', 'Dispatcher setting guard works');
ok($count == 1, 'Dispatch callback ran once');
ok(type-name(C1) eq 'C1', 'Can use it another time with the same type');
ok($count == 1, 'Dispatch callback was not run again');

# Test it with a second type, both record and run modes. This ensures the
# guard really is being checked.
ok(type-name(C2) eq 'C2', 'Can handle polymorphic sites when guard fails');
ok($count == 2, 'Dispatch callback ran a second time for new type');
ok(type-name(C2) eq 'C2', 'Second call with new type works');

# Check that we can use it with the original type too, and it has stacked
# the dispatch programs up at the same callsite.
ok(type-name(C1) eq 'C1', 'Call with original type still works');
ok($count == 2, 'Dispatch callback only ran a total of 2 times');

This time two dispatch programs get produced, one for C1:

Dispatch program (1 temporaries)
  Ops:
    Guard arg 0 (type=C1)
    Load collectable constant at index 1 into temporary 0
    Set result string value from temporary 0

And another for C2:

Dispatch program (1 temporaries)
  Ops:
    Guard arg 0 (type=C2)
    Load collectable constant at index 1 into temporary 0
    Set result string value from temporary 0

Once again, no leftovers from capture manipulation, tracking, or dispatcher delegation; the dispatch program does a type guard against an argument, then produces the result string. The whole call to $arg-val.HOW.name($arg-val) is elided, the dispatcher we wrote encoding the knowledge – in a way that the VM can understand – that a type’s name can be considered immutable.

This example is a bit contrived, but now consider that we instead look up a method and guard on the invocant type: that’s a method cache! Guard the types of more of the arguments, and we have a multi cache! Do both, and we have a multi-method cache.

The latter is interesting in so far as both the method dispatch and the multi dispatch want to guard on the invocant. In fact, in MoarVM today there will be two such type tests until we get to the point where the specializer does its work and eliminates these duplicated guards. However, the new dispatcher does not treat the dispatcher-guard-type as a kind of imperative operation that writes a guard into the resultant dispatch program. Instead, it declares that the argument in question must be guarded. If some other dispatcher already did that, it’s idempotent. The guards are emitted once all dispatch programs we delegate through, on the path to a final outcome, have had their say.
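A toy sketch of the declarative guard idea (plain Python, with invented names, not the actual MoarVM mechanism): each dispatcher declares the guards it needs, the declarations accumulate in a set, and the final dispatch program therefore contains each distinct guard once, no matter how many dispatchers asked for it:

```python
# Toy sketch: guards as declarative, idempotent facts rather than
# imperative writes. Duplicate declarations collapse in the set, so the
# emitted dispatch program contains each distinct guard exactly once.

declared = set()

def guard_type(arg_index, typ):
    # Declaring the same guard twice is a no-op: sets deduplicate.
    declared.add(('type', arg_index, typ.__name__))

# Method dispatch guards the invocant type...
guard_type(0, int)
# ...and multi dispatch, delegated to afterwards, declares the same guard.
guard_type(0, int)

print(sorted(declared))   # a single guard survives
```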

Fun aside: those being especially attentive will have noticed that the dispatch mechanism is used as part of implementing new dispatchers too, and indeed, this ultimately will mean that the specializer can specialize the dispatchers and have them JIT-compiled into something more efficient too. After all, from the perspective of MoarVM, it’s all just bytecode to run; it’s just that some of it is bytecode that tells the VM how to execute Raku programs more efficiently!

Dispatch resumption

A resumable dispatcher needs to do two things:

  1. Provide a resume callback as well as a dispatch one when registering the dispatcher
  2. In the dispatch callback, specify a capture, which will form the resume initialization state

When a resumption happens, the resume callback will be called, with any arguments for the resumption. It can also obtain the resume initialization state that was set in the dispatch callback. The resume initialization state contains the things needed in order to continue with the dispatch the first time it is resumed. We’ll take a look at how this works for method dispatch to see a concrete example. I’ll also, at this point, switch to looking at the real Rakudo dispatchers, rather than simplified test cases.

The Rakudo dispatchers take advantage of delegation, duplicate guards, and capture manipulations all having no runtime cost in the resulting dispatch program to, in my mind at least, quite nicely factor what is a somewhat involved dispatch process. There are multiple entry points to method dispatch: the normal boring $obj.meth(), the qualified $obj.Type::meth(), and the call me maybe $obj.?meth(). These have common resumption semantics – or at least, they can be made to, provided we always carry a starting type in the resume initialization state, which is the type of the object that we do the method dispatch on.

Here is the entry point to dispatch for a normal method dispatch, with the boring details of reporting missing method errors stripped out.

# A standard method call of the form $obj.meth($arg); also used for the
# indirect form $obj."$name"($arg). It receives the decontainerized invocant,
# the method name, and the args (starting with the invocant including any
# container).
nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call', -> $capture {
    # Try to resolve the method call using the MOP.
    my $obj := nqp::captureposarg($capture, 0);
    my str $name := nqp::captureposarg_s($capture, 1);
    my $meth := $obj.HOW.find_method($obj, $name);

    # Report an error if there is no such method.
    unless nqp::isconcrete($meth) {
        !!! 'Error reporting logic elided for brevity';
    }

    # Establish a guard on the invocant type and method name (however the name
    # may well be a literal, in which case this is free).
    nqp::dispatch('boot-syscall', 'dispatcher-guard-type',
        nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 0));
    nqp::dispatch('boot-syscall', 'dispatcher-guard-literal',
        nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $capture, 1));

    # Add the resolved method and delegate to the resolved method dispatcher.
    my $capture-delegate := nqp::dispatch('boot-syscall',
        'dispatcher-insert-arg-literal-obj', $capture, 0, $meth);
    nqp::dispatch('boot-syscall', 'dispatcher-delegate',
        'raku-meth-call-resolved', $capture-delegate);
});

Now for the resolved method dispatcher, which is where the resumption is handled. First, let’s look at the normal dispatch callback (the resumption callback is included but empty; I’ll show it a little later).

# Resolved method call dispatcher. This is used to call a method, once we have
# already resolved it to a callee. Its first arg is the callee, the second and
# third are the type and name (used in deferral), and the rest are the args to
# the method.
nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call-resolved',
    # Initial dispatch
    -> $capture {
        # Save dispatch state for resumption. We don't need the method that will
        # be called now, so drop it.
        my $resume-capture := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
            $capture, 0);
        nqp::dispatch('boot-syscall', 'dispatcher-set-resume-init-args', $resume-capture);

        # Drop the dispatch start type and name, and delegate to multi-dispatch or
        # just invoke if it's single dispatch.
        my $delegate_capture := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
            nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $capture, 1), 1);
        my $method := nqp::captureposarg($delegate_capture, 0);
        if nqp::istype($method, Routine) && $method.is_dispatcher {
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi', $delegate_capture);
        }
        else {
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-invoke', $delegate_capture);
        }
    },
    # Resumption
    -> $capture {
        ... 'Will be shown later';
    });

There’s an arguable cheat in raku-meth-call: it doesn’t actually insert the type object of the invocant in place of the invocant. It turns out that it doesn’t really matter. Otherwise, I think the comments (which are to be found in the real implementation also) tell the story pretty well.

One important point that may not be clear – but follows a repeating theme – is that the setting of the resume initialization state is also more of a declarative rather than an imperative thing: there isn’t a runtime cost at the time of the dispatch, but rather we keep enough information around in order to be able to reconstruct the resume initialization state at the point we need it. (In fact, when we are in the run phase of a resume, we don’t even have to reconstruct it in the sense of creating a capture object.)

Now for the resumption. I’m going to present a heavily stripped down version that only deals with the callsame semantics (the full thing has to deal with such delights as lastcall and nextcallee too). The resume initialization state exists to seed the resumption process. Once we know we actually do have to deal with resumption, we can do things like calculating the full list of methods in the inheritance graph that we want to walk through. Each resumable dispatcher gets a single storage slot on the call stack that it can use for its state. It can initialize this in the first step of resumption, and then update it as we go. Or more precisely, it can set up a dispatch program that will do this when run.

A linked list turns out to be a very convenient data structure for the chain of candidates we will walk through. We can work our way through a linked list by keeping track of the current node, meaning that there need only be a single thing that mutates, which is the current state of the dispatch. The dispatch program mechanism also provides a way to read an attribute from an object, and that is enough to express traversing a linked list into the dispatch program. This also means zero allocations.

So, without further ado, here is the linked list (rather less pretty in NQP, the restricted Raku subset, than it would be in full Raku):

# A linked list is used to model the state of a dispatch that is deferring
# through a set of methods, multi candidates, or wrappers. The Exhausted class
# is used as a sentinel for the end of the chain. The current state of the
# dispatch points into the linked list at the appropriate point; the chain
# itself is immutable, and shared over (runtime) dispatches.
my class DeferralChain {
    has $!code;
    has $!next;
    method new($code, $next) {
        my $obj := nqp::create(self);
        nqp::bindattr($obj, DeferralChain, '$!code', $code);
        nqp::bindattr($obj, DeferralChain, '$!next', $next);
        $obj
    }
    method code() { $!code }
    method next() { $!next }
};
my class Exhausted {};

And finally, the resumption handling.

nqp::dispatch('boot-syscall', 'dispatcher-register', 'raku-meth-call-resolved',
    # Initial dispatch
    -> $capture {
        ... 'Presented earlier';
    },
    # Resumption. The resume init capture's first two arguments are the type
    # that we initially did a method dispatch against and the method name
    # respectively.
    -> $capture {
        # Work out the next method to call, if any. This depends on if we have
        # an existing dispatch state (that is, a method deferral is already in
        # progress).
        my $init := nqp::dispatch('boot-syscall', 'dispatcher-get-resume-init-args');
        my $state := nqp::dispatch('boot-syscall', 'dispatcher-get-resume-state');
        my $next_method;
        if nqp::isnull($state) {
            # No state, so just starting the resumption. Guard on the
            # invocant type and name.
            my $track_start_type := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $init, 0);
            nqp::dispatch('boot-syscall', 'dispatcher-guard-type', $track_start_type);
            my $track_name := nqp::dispatch('boot-syscall', 'dispatcher-track-arg', $init, 1);
            nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_name);

            # Also guard on there being no dispatch state.
            my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
            nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_state);

            # Build up the list of methods to defer through.
            my $start_type := nqp::captureposarg($init, 0);
            my str $name := nqp::captureposarg_s($init, 1);
            my @mro := nqp::can($start_type.HOW, 'mro_unhidden')
                ?? $start_type.HOW.mro_unhidden($start_type)
                !! $start_type.HOW.mro($start_type);
            my @methods;
            for @mro {
                my %mt := nqp::hllize($_.HOW.method_table($_));
                if nqp::existskey(%mt, $name) {
                    @methods.push(%mt{$name});
                }
            }

            # If there's nothing to defer to, we'll evaluate to Nil (just don't set
            # the next method, and it happens below).
            if nqp::elems(@methods) >= 2 {
                # We can defer. Populate next method.
                @methods.shift; # Discard the first one, which we initially called
                $next_method := @methods.shift; # The immediate next one

                # Build chain of further methods and set it as the state.
                my $chain := Exhausted;
                while @methods {
                    $chain := DeferralChain.new(@methods.pop, $chain);
                }
                nqp::dispatch('boot-syscall', 'dispatcher-set-resume-state-literal', $chain);
            }
        }
        elsif !nqp::istype($state, Exhausted) {
            # Already working through a chain of method deferrals. Obtain
            # the tracking object for the dispatch state, and guard against
            # the next code object to run.
            my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
            my $track_method := nqp::dispatch('boot-syscall', 'dispatcher-track-attr',
                $track_state, DeferralChain, '$!code');
            nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_method);

            # Update dispatch state to point to next method.
            my $track_next := nqp::dispatch('boot-syscall', 'dispatcher-track-attr',
                $track_state, DeferralChain, '$!next');
            nqp::dispatch('boot-syscall', 'dispatcher-set-resume-state', $track_next);

            # Set next method, which we shall defer to.
            $next_method := $state.code;
        }
        else {
            # Dispatch already exhausted; guard on that and fall through to returning
            # Nil.
            my $track_state := nqp::dispatch('boot-syscall', 'dispatcher-track-resume-state');
            nqp::dispatch('boot-syscall', 'dispatcher-guard-literal', $track_state);
        }

        # If we found a next method...
        if nqp::isconcrete($next_method) {
            # Call with same (that is, original) arguments. Invoke with those.
            # We drop the first two arguments (which are only there for the
            # resumption), add the code object to invoke, and then leave it
            # to the invoke or multi dispatcher.
            my $just_args := nqp::dispatch('boot-syscall', 'dispatcher-drop-arg',
                nqp::dispatch('boot-syscall', 'dispatcher-drop-arg', $init, 0),
                0);
            my $delegate_capture := nqp::dispatch('boot-syscall',
                'dispatcher-insert-arg-literal-obj', $just_args, 0, $next_method);
            if nqp::istype($next_method, Routine) && $next_method.is_dispatcher {
                nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-multi',
                        $delegate_capture);
            }
            else {
                nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'raku-invoke',
                        $delegate_capture);
            }
        }
        else {
            # No method, so evaluate to Nil (boot-constant disregards all but
            # the first argument).
            nqp::dispatch('boot-syscall', 'dispatcher-delegate', 'boot-constant',
                nqp::dispatch('boot-syscall', 'dispatcher-insert-arg-literal-obj',
                    $capture, 0, Nil));
        }
    });

That’s quite a bit to take in, and quite a bit of code. Remember, however, that this is only run for the record phase of a dispatch resumption. It also produces a dispatch program at the callsite of the callsame, with the usual guards and outcome. Implicit guards are created for the dispatcher that we are resuming at that point. In the most common case this will end up monomorphic or bimorphic, although situations involving nestings of multiple dispatch or method dispatch could produce a more morphic callsite.

The design I’ve picked forces resume callbacks to deal with two situations: the first resumption and the latter resumptions. This is not ideal in a couple of ways:

  1. It’s a bit inconvenient for those writing dispatch resume callbacks. However, it’s not like this is a particularly common activity!
  2. The difference results in two dispatch programs being stacked up at a callsite that might otherwise get just one.

Only the second of these really matters. The reason for the non-uniformity is to make sure that the overwhelming majority of calls, which never lead to a dispatch resumption, incur no per-dispatch cost for a feature that they never end up using. If the result is a little more cost for those using the feature, so be it. In fact, early benchmarking shows callsame with wrap and with method calls running up to 10 times faster using the new dispatcher than in current Rakudo, and that’s before the specializer understands enough about it to improve things further!

What’s done so far

Everything I’ve discussed above is implemented, except that I may have given the impression somewhere that multiple dispatch is fully implemented using the new dispatcher, and that is not the case yet (no handling of where clauses and no dispatch resumption support).

Next steps

Getting the missing bits of multiple dispatch fully implemented is the obvious next step. The other missing semantic piece is support for callwith and nextwith, where we wish to change the arguments that are being used when moving to the next candidate. A few other minor bits aside, that in theory will get all of the Raku dispatch semantics at least supported.

Currently, all standard method calls ($obj.meth()) and other calls (foo() and $foo()) go via the existing dispatch mechanism, not the new dispatcher. Those will need to be migrated to use the new dispatcher also, and any bugs that are uncovered will need fixing. That will get things to the point where the new dispatcher is semantically ready.

After that comes performance work: making sure that the specializer is able to deal with dispatch program guards and outcomes. The goal, initially, is to get steady state performance of common calling forms to perform at least as well as in the current master branch of Rakudo. It’s already clear enough there will be some big wins for some things that to date have been glacial, but it should not come at the cost of regression on the most common kinds of dispatch, which have received plenty of optimization effort before now.

Furthermore, NQP – the restricted form of Raku that the Rakudo compiler and other bits of the runtime guts are written in – also needs to be migrated to use the new dispatcher. Only when that is done will it be possible to rip out the current method cache, multiple dispatch cache, and so forth from MoarVM.

An open question is how to deal with backends other than MoarVM. Ideally, the new dispatch mechanism will be ported to those. A decent amount of it should be possible to express in terms of the JVM’s invokedynamic (and this would all probably play quite well with a Truffle-based Raku implementation, although I’m not sure there is a current active effort in that area).

Future opportunities

While my current focus is to ship a Rakudo and MoarVM release that uses the new dispatcher mechanism, that won’t be the end of the journey. Some immediate ideas:

Some new language features may also be possible to provide in an efficient way with the help of the new dispatch mechanism. For example, there’s currently not a reliable way to try to invoke a piece of code, just run it if the signature binds, or to do something else if it doesn’t. Instead, things like the Cro router have to first do a trial bind of the signature, and then do the invoke, which makes routing rather more costly. There’s also the long suggested idea of providing pattern matching via signatures with the when construct (for example, when * -> ($x) {}; when * -> ($x, *@tail) { }), which is pretty much the same need, just in a less dynamic setting.

In closing…

Working on the new dispatch mechanism has been a longer journey than I first expected. The resumption part of the design was especially challenging, and there’s still a few important details to attend to there. Something like four potential approaches were discarded along the way (although elements of all of them influenced what I’ve described in this post). Abstractions that hold up are really, really, hard.

I also ended up having to take a couple of months away from doing Raku work at all, felt a bit crushed during some others, and have been juggling this with the equally important RakuAST project (which will be simplified by being able to assume the presence of the new dispatcher, and also offers me a range of softer Raku hacking tasks, whereas the dispatcher work offers few easy pickings).

Given all that, I’m glad to finally be seeing the light at the end of the tunnel. The work that remains is enumerable, and the day we ship a Rakudo and MoarVM release using the new dispatcher feels a small number of months away (and I hope writing that is not tempting fate!)

The new dispatcher is probably the most significant change to MoarVM since I founded it, in so far as it sees us removing a bunch of things that have been there pretty much since the start. RakuAST will also deliver the greatest architectural change to the Rakudo compiler in a decade. Both are an opportunity to fold years of learning things the hard way into the runtime and compiler. I hope when I look back at it all in another decade’s time, I’ll at least feel I made more interesting mistakes this time around.

brrt to the future: Why bother with Scripting?

Published by Bart Wiegmans on 2021-03-14T14:33:00

Many years back, Larry Wall shared his thesis on the nature of scripting. Since recently even Java gained 'script' support I thought it would be fitting to revisit the topic, and hopefully relevant to the perl and raku language community.

The weakness of Larry's treatment (which, to be fair to the author, I think is more intended to be enlightening than to be complete) is the contrast of scripting with programming. This contrast does not permit a clear separation because scripts are programs. That is to say, no matter how long or short, scripts are written commands for a machine to execute, and I think that's a pretty decent definition of a program in general.

A more useful contrast - and, I think, the intended one - is between scripts and other sorts of programs, because that allows us to compare scripting (writing scripts) with 'programming' (writing non-script programs). And to do that we need to know what other sorts of programs there are.

The short version of that answer is - systems and applications, and a bunch of other things that aren't really relevant to the working programmer, like (embedded) control algorithms, spreadsheets and database queries. (The definition I provided above is very broad, by design, because I don't want to get stuck on boundary questions). Most programmers write applications, some write systems, virtually all write scripts once in a while, though plenty of people who aren't professional programmers also write scripts.

I think the defining features of applications and systems are, respectively:

Consider for instance a mail client (like thunderbird) in comparison to a mailer daemon (like sendmail) - one provides an interface to read and write e-mails (the model) and the other provides functionality to send that e-mail to other servers.

Note that under this (again, broad) definition, libraries are also system software, which makes sense, considering that their users are developers (just as for, say, PostgreSQL) who care about things like performance, reliability, and correctness. Incidentally, libraries as well as 'typical' system software (such as database engines and operating system kernels) tend to be written in languages like C and C++ for much the same reasons.

What then, are the differences between scripts, applications, and systems? I think the following is a good list:

Obviously these distinctions aren't really binary - 'short' versus 'long', 'ad-hoc' versus 'general purpose'  - and can't be used to conclusively settle the question whether something is a script or an application. (If, indeed, that question ever comes up). More important is that for the 10 or so scripts I've written over the past year - some professionally, some not - all or most of these properties held, and I'd be surprised if the same isn't true for most readers. 

And - finally coming to the point that I'm trying to make today - these features point to a specific niche of programs more than to a specific technology (or set of technologies). To be exact, scripts are (mostly) short, custom programs to automate ad-hoc tasks, tasks that are either too specific or too small to develop and distribute another program for.

This has further implications on the preferred features of a scripting language (taken to mean, a language designed to enable the development of scripts). In particular:

As an example of the last point - Python 3 requires users to be exact about the encoding of their input, causing all sorts of trouble for unsuspecting scripters when they accidentally try to read ISO-8859-1 data as UTF-8, or vice versa. Python 2 did not, and for most scripts - not applications - I actually think that is the right choice.

This niche doesn't always exist. In computing environments where everything of interest is adequately captured by an application, or which lack the ability to effectively automate ad-hoc tasks (I'm thinking in particular of Windows before PowerShell), the practice of scripting tends not to develop. Similarly, in a modern 'cloud' environment, where system setup is controlled by a state machine hosted by another organization, scripting doesn't really have much of a future.

To put it another way, scripting only thrives in an environment that has a lot of 'scriptable' tasks; meaning tasks for which there isn't already a pre-made solution available, environments that have powerful facilities available for a script to access, and whose users are empowered to automate those tasks. Such qualities are common on Unix/Linux 'workstations' but rather less so on smartphones and (as noted before) cloud computing environments.

Truth be told I'm a little worried about that development. I could point to, and expound on, the development and popularity of languages like go and rust, which aren't exactly scripting languages, or the replacement of Javascript with TypeScript, to make the point further, but I don't think that's necessary. At the same time I could point to the development of data science as a discipline to demonstrate that scripting is alive and well (and indeed perhaps more economically relevant than before).

What should be the conclusion for perl 5/7 and raku? I'm not quite sure, mostly because I'm not quite sure whether the broader perl/raku community would prefer their sister languages to be scripting or application languages. (As implied above, I think the Python community chose that they wanted Python 3 to be an application language, and this was not without consequences to their users). 

Raku adds a number of features common to application languages (I'm thinking of its powerful type system in particular), continuing a trend that perl 5 arguably pioneered. This is indeed a very powerful strategy - a language can be introduced for scripts and some of those scripts are then extended into applications (or even systems), thereby ensuring its continued usage. But for it to work, a new perl family language must be introduced on its scripting merits, and there must be a plentiful supply of scriptable tasks to automate, some of which - or a combination of which - grow into an application.

For myself, I would like to see scripting have a bright future. Not just because scripting is the most accessible form of programming, but also because an environment that permits, even requires, scripting is one where not all interesting problems have been solved, one where its users ask it to do tasks so diverse that there isn't an app for that, yet. One where the true potential of the wonderful devices that surround us can be explored.

In such a world there might well be a bright future for scripting.

gfldex: Raku is a match for *

Published by gfldex on 2021-03-11T20:55:22

PimDaniel asked an interesting question.

How do i test match is True while matching : this does NOT work :
if my ($type,$a,$b,$c) = ($v ~~ /^ ('horiz'|'vertic') '_' (\d+) '_' (\d+) '_' (\d+) $/)>>.Str { ... }
Well i made it in 2 times 1/ capture and test the match, 2/ convert the match to Str.

There was no prompt answer and no improvement at all. I couldn't find a nice way to do this quickly either. In fact it took me the better part of an hour to crack this nut. The main issue here is that a failed match will produce Nil, which .Str will complain about. So let's separate the boolean check of if from the conversion to Str.

my $a = '1 B';

if $a ~~ /(<digit>) \s (<alpha>)/ -> $_ {
    my ($one, $B) = .deepmap: *.Str;
    say "$one $B";
}
# OUTPUT: 1 B

By forcing the result of the condition expression into the topic, we can run any method on the result of the match, but only if Match.Bool returns true. I don't have a degree in CS* but would be very surprised if Raku signatures would not turn out to be Turing complete.

if $a ~~ /(<digit>) \s (<alpha>)/ -> Match (Str() $one, Str() $B) {
    dd $one;
    dd $B;
}
# OUTPUT: "1"
          "B"

The signature of the if block coerces the Match to a list. We pick two elements of it and coerce those to Str. Of course we could coerce to anything we like, based on the position of the captures.

Regexes in Raku are compiled to the same byte code as the rest of the program. In fact grammars are just classes with a funky syntax. That's why we can run Raku code inside a regex with ease. That means we can turn the whole thing inside out.

my @a = <1 B 3 D 4>;
my @b;

my $hit;

for @a -> $e {
    @b.push: ($e ~~ /(<alpha>) || { next } /).Str;
}

say @b;
# OUTPUT: [B D]

Here we skip the .push if the match does not succeed by skipping the rest of the loop body with next. We could fire any control exception inside the regex. That means we could stick the whole thing into a sub and return the value we are looking for from within the regex.

sub cherry-pick-numeric(Str $matchee) {
    $matchee ~~ m/(<digit>) && { return .Numeric }/;
    Empty
}

@b = do .&cherry-pick-numeric for @a;

dd @b;
# OUTPUT: Array @b = [1, 3, 4]

Raku has been in the making for 10 years. This was a gargantuan task. Now comes the hard bit. We have to take that large language and find all the nice idioms. Good things come to those who wait (on IRC).

*) read: Don’t believe anything I write. You have been warned.

Update:

In truly lazy fashion I came up with a way to turn a match into a lazy list after the work should have been done.

$a = '1B3D4';

my \ll := gather $a ~~ m:g/
      [ <alpha> && { take $/<alpha>.Str } ]
    | [ <digit> && { take $/.<digit>.Numeric } ]
    | [ { say 'step' } ]
/;
say ll[0];
say ll[3];
# OUTPUT: 1
          step
          step
          step
          D

The trick is to force the match to run all the way to the end of the string with the :g adverb. This run will be interrupted by a take (by throwing CX::Take) and resumed when the next value is asked from the Seq returned by gather. I don't know if this is memory efficient though. There may be a Match instance kept around for each take.

Rakudo Weekly News: 2021.10 Automated Star

Published by liztormato on 2021-03-08T17:01:36

Patrick Spek has announced the release of the Rakudo Star 2021.02.1 package (based on the 2021.02.1 Rakudo Compiler release). This is the first time this has happened using a Github Action workflow. Binary releases are not yet available: like everything in the Raku Programming Language, it is the work of volunteers. To create MacOS and Windows installable packages, a volunteer is needed to create the Github Actions workflow for MacOS and/or Windows! The advantage being that this way, you would only need to do this once instead of for each release! So please, stand up if you have the know-how and time to do it!

February Grant Report

Jonathan Worthington has published their February Grant Report for the RakuAST Grant (/r/rakulang comments).

Thread safe caching

Simon Proctor blogged about a thread safe caching algorithm in the Raku Programming Language, which resulted in quite a discussion on the pros and cons of the various approaches in /r/rakulang.

Through the cracks

It appears yours truly missed the announcement of an interesting application written in Raku last August: Hubtodate by demostanis (/r/rakulang comments). Kudos and apologies to demostanis.

Weeklies

Weekly Challenge #103 is available for your perusal. Mohammad S Anwar also published a Monthly Report.

Pull Requests

Please check them out and leave any comments that you may have!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on the mailing list

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

A week with some significant speedups for some types of workflow in the Raku Programming Language! Hopefully, this was another Rakudo Weekly News with plenty to read and think about. See you next week for another instalment!

gfldex: Undocumented escape hatch

Published by gfldex on 2021-02-28T13:30:23

On my quest to a custom when-statement I did quite a bit of reading. The study of roast and Actions.nqp can lead to great gain in knowledge.

$ less -N S04-statements/given.t
136 # given returns the correct value:
137 {
138      sub ret_test($arg) {
139        given $arg {
140          when "a" { "A" }
141          when "b" { "B" }
142        }
143      }
144
145     is( ret_test("a"), "A", "given returns the correct value (1)" );
146     is( ret_test("b"), "B", "given returns the correct value (2)" );
147 }

As we can see in this example, the spec asks given to return the value provided to succeed. This is an ENODOC. We don't have to depend on sink to turn the succeed value into a Routine's return value.

my $result = do given 'a' {
    CONTROL { default { say 'seen ', .^name } }
    when Str { say 'Str'; succeed('It was a string.'); }
}
dd $result;
# OUTPUT: Str
          Str $result = "It was a string."

It's a bit odd that we need the do though, as given will always return at least Nil. The oddity doesn't stop there. We can get hold of control exceptions. Some of which can return a value. That value is well hidden in nqp-land. Control exceptions are clearly not an implementation detail. So there is no reason for that limitation. Let's remove it.

given 'a' {
    succeed .Str;
    CONTROL {
        when CX::Succeed {
            use nqp;
            my $vmex := nqp::getattr(nqp::decont($_), Exception, '$!ex');
            my $payload := nqp::getpayload($vmex);
            say 'seen succeed with payload: ', $payload;
        }
        default { say 'seen ', .^name; }
    }
}
# OUTPUT: seen succeed with payload: a

My expedition into nqp-land was started by the discovery that CX::Succeed and CX::Proceed are swallowed by a hidden monster.

given 'a' {
    CONTROL { default { say 'seen ', .^name } }
    when Str { say 'Str'; succeed('It was a string.'); }
}
# OUTPUT:
$ less -N src/Perl6/Actions.nqp
9932     sub when_handler_helper($when_block) {
9933         unless nqp::existskey(%*HANDLERS, 'SUCCEED') {
9934             %*HANDLERS<SUCCEED> := QAST::Op.new(
9935                 :op('p6return'),
9936                 wrap_return_type_check(
9937                     QAST::Op.new(
9938                         :op('getpayload'),
9939                         QAST::Op.new( :op('exception') )
9940                     ),
9941                     $*DECLARAND) );

The first when or default clause adds a fresh handler that only checks for SUCCEED, ignoring any CONTROL blocks already present. Given that intercepting X::Control is specced, this is rather surprising.

Alas, adding exception handlers via macros doesn't work right now. This is not a pressing issue because macros are subject to change by RakuAST anyway and I might get the desired result with a Slang.

gfldex: Custom when

Published by gfldex on 2021-02-25T09:04:14

I didn’t quite like the syntax of using match in the last post. The commas in the list of its arguments looked strangely out of place. Maybe because my eyes are used to a given block. Sleeping over it helped.

sub accord(&c) { (c(CALLER::<$_>); succeed) if &c.cando(\(CALLER::<$_>)) }

given Err.new(:msg<a>) {
    accord -> Hold (:$key) { put „holding $key“; }
    accord -> Err (:$msg) { warn „ERR: $msg“ }
    default { fail ‚unsupported‘ }
}

This works because accord mimics what when is doing. It does some matching, calls a block when True and adds a succeed (by throwing a control exception) at the end of each block. All given is doing is setting the topic. It also acts as a CALLER so we can access its $_ via a pseudo package. Using the signature of a pointy to do deconstruction is quite powerful. Adding this to CORE might be a good idea.

We may have to change the definition of Raku to: “Raku is a highly composable programming language, where things just fall into place.”

UPDATE:

There are cases where $_ is not a dynamic. Also, succeed is throwing a control exception and the handlers for those are added by when or default. This happens at compile time and can't currently be done with macros. The first problem is solvable with black magic. The latter requires a default-block. I didn't find a way to provide a sensible error message if that block is missing.

multi sub accord(&c) {
    use nqp;
    $_ := nqp::getlexcaller('$_');
    (c($_); succeed) if &c.cando(\($_))
}

for @possibilities.roll(1) -> $needle {
    given $needle {
        accord -> Hold (:$key) { put „holding $key“; }
        accord -> Err (:$msg) { warn „ERR: $msg“ }
        default { warn ‚unsupported‘ }
    }
}

gfldex: Pattern dispatch

Published by gfldex on 2021-02-24T09:25:54

The ever helpful raiph wished for RakuAST in an answer to a question about pattern matching like it is done in Haskell. It was proposed to use MMD to solve this problem. Doing so while getting a fall-through default remained unsolved. Since dispatch simply is pattern matching, we just need to do some extra work. In a nutshell, the dispatcher gets a list of functions and a list of arguments. The first function that takes all arguments wins.

class Hold { has $.key; }
class Press { has $.key; }
class Err { has $.msg; }

sub else(&code) { &code }

sub match($needle, *@tests) {
    for @tests.head(*-1) -> &f {
        if &f.cando(\($needle)) {
            return f($needle);
        }
    }
    @tests.tail.();
}

match Hold.new(:key<a>),
    -> Hold (:$key) { put „holding $key“; },
    -> Press (:$key) { put „pressing $key“; },
    -> Err (:$msg) { warn „ERR: $msg“ },
    else { fail ‚unsupported‘ };

The method .cando needs a Capture to tell us if a Routine can be called with a given list of arguments. To create such a Capture we use the literal \($arguments, $go, $here). We don’t test the default at the end. Instead we call that function when no other function matches. Declaring the sub else is just for cosmetics.

Since we are in functional land we can use all the convenient features Raku provides us with.

my &key-matcher = &match.assuming(*,[
        -> Hold (:$key) { put „holding $key“; },
        -> Press (:$key) { put „pressing $key“; },
        -> Err (:$msg) { warn „ERR: $msg“ },
        else { fail ‚unsupported‘ };
]);

sub key-source {
    gather loop {
        sleep 1;
        take (Hold.new(:key<a>), Press.new(:key<b>), Err.new(:msg<WELP!>), 'unsupported').pick;
    }
}

.&key-matcher for key-source;

We have to help .assuming a little to understand slurpies by putting the list of functions in an explicit Array.

There is always a functional way to solve a problem. Sometimes we can even get a neat syntax out of it.

gfldex: Method-ish

Published by gfldex on 2021-02-17T10:27:30

In my last post I once again struggled with augmenting classes from CORE. That struggle wasn't needed at all, as I didn't change the state of the object with the added method. For doing more advanced stuff I might have to. By sticking my hand so deep into the guts of Rakudo I might get myself burned. Since what I want to do ties my code to changes in the compiler anyway, I might as well go all in and descend into nqp-land.

my \j = 1 | 2 | 3;
dd j;
use nqp;
.say for nqp::getattr(j, Junction, '$!eigenstates');
# OUTPUT: any(1, 2, 3)
          1
          2
          3

We can use nqp to get hold of private attributes without adding any methods. That's a bit unwieldy. So let's do some deboilerplating with a pseudo-method.

sub pry(Mu $the-object is raw) {
    use InterceptAllMethods;

    class Interceptor {
        has Mu $!the-object;
        method ^find_method(Mu \type, Str $name) {
            my method (Mu \SELF:) is raw {
                use nqp;
                my $the-object := nqp::getattr(SELF, Interceptor, '$!the-object');
                nqp::getattr($the-object, $the-object.WHAT, '$!' ~ $name)
            }
        }
    }

    use nqp;
    nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object);
}

.say for j.&pry.eigenstates;
# OUTPUT: 1
          2
          3

With InterceptAllMethods lizmat changed the behaviour of the class-keyword to allow us to provide a FALLBACK-method that captures anything, including methods inherited from Mu. That in turn allows the object returned by pry to divert any method call to a custom method. In this method we can do whatever we want with the object .&pry is called with.

Since our special object will intercept any call, even those of Mu, we need to find another way to call .new. Since .^ is not a special form of . we can use it to get access to class methods.
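As a quick reminder of what .^ does in general (nothing here is specific to the interceptor): it calls the named method on the object's metaobject, passing the object itself along, so it bypasses normal method dispatch entirely.

```raku
# .^foo is sugar for calling foo on the HOW with the object as argument:
say 42.^name;                      # OUTPUT: Int
say 42.^name eq 42.HOW.name(42);   # the spelled-out form: True
```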

sub interceptor(Method $the-method){
    use InterceptAllMethods;
    use nqp;

    sub (Mu $the-object is raw) {
        my class Interceptor {
            has Mu $!the-object;
            has Code $!the-method;

            method ^find_method(Mu \type, Mu:D $name) {
                my method (Mu \SELF: |c) is raw {
                    $!the-method.($!the-object, $name, |c)
                }
            }
            method ^introspect(Mu \type, Mu \obj) {
                my method call-it() is raw {
                    $!the-object
                }
                obj.&call-it;
            }
            method ^new(Mu \type, $the-object!, $the-method) {
                nqp::p6bindattrinvres(
                        nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object),
                        Interceptor, '$!the-method', $the-method)
            }
        }

            # nqp::p6bindattrinvres(
                #     nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object),
                #   Interceptor, '$!the-method', $the-method);
        Interceptor.^new($the-object, $the-method)
    }
}

my &first-defined = interceptor(
    my method (Positional \SELF: $name) {
        for SELF.flat -> $e {
            with $e."$name"(|%_) {
                .return
            }
        }
        Nil
    }
);

my $file = <file1.txt file2.txt file3.txt nohup.out>».IO.&first-defined.open(:r);
dd $file;
# OUTPUT: Handle $file = IO::Handle.new(path => IO::Path.new("nohup.out", :SPEC(IO::Spec::Unix), :CWD("/home/dex/projects/raku/tmp")), chomp => Bool::True, nl-in => $["\n", "\r\n"], nl-out => "\n", encoding => "utf8")

The sub interceptor takes a method and returns a sub. If that sub is called like a method, it will forward both the name of the method to be called and the invocant to a custom method. When .&first-defined is called, a special object is returned. Let's have a look at what it is.

my \uhhh-special = <a b c>.&first-defined;
dd uhhh-special.^introspect(uhhh-special);
# OUTPUT: ($("a", "b", "c"), method <anon> (Positional \SELF: $name, *%_) { #`(Method|93927752146784) ... })

We have to give .^introspect the object we want to have a look at because its invocant is the type object of the class Interceptor.

Currently there is no way known to me (After all, I know just enough to be really dangerous.) to export and import from EXPORTHOW in the same file. That is unfortunate because lizmat decided to overload the keyword class instead of exporting the special Metamodel::ClassHOW with a different name. If we don’t want or can’t have external dependencies, we can use the MOP to create our type object.

class InterceptHOW is Metamodel::ClassHOW {
    method publish_method_cache(|) { }
}

sub ipry(Mu $the-object is raw) {
    my \Interceptor = InterceptHOW.new_type(:name<Interceptor>);
    Interceptor.^add_attribute(Attribute.new(:name<$!the-object>, :type(Mu), :package(Interceptor)));
    Interceptor.^add_meta_method('find_method',
        my method find_method(Mu \type, Str $name) {
            # say „looking for $name“;
            my method (Mu \SELF:) is raw {
                use nqp;
                my $the-object := nqp::getattr(SELF, Interceptor, '$!the-object');
                nqp::getattr($the-object, $the-object.WHAT, '$!' ~ $name)
            }
    });
    Interceptor.^compose;

    use nqp;
    nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object);
}

While I wrote this I discovered that .^add_meta_method only works if the method provided to it has the same name as the Str in its first argument. At first I tried an anonymous method, which ended up in .^meta_method_table but was never called. I guess this bug doesn't really matter because this meta-method isn't documented at all. If I play with dragons I have no right to complain about burns. You will spot that method in the wild in Actions.nqp. There is no magic going on with the class-keyword. Rakudo is just using the MOP to construct type objects.

We can’t overload the assignment operator in Raku. That isn’t really needed because assignment happens by calling a method named STORE. Since we got full control over dispatch, we can intercept any method call including a chain of method calls.
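That assignment is just a STORE call can be seen in isolation with the built-in Proxy type (a small sketch, independent of the interceptor code):

```raku
# Binding a Proxy into a container lets us watch STORE happen on assignment:
sub logged-scalar {
    my $storage;                        # the actual value lives here
    Proxy.new(
        FETCH => method ()     { $storage },
        STORE => method ($new) { say "STORE called with $new"; $storage = $new },
    )
}

my $x := logged-scalar;   # note the binding; assignment would replace the Proxy
$x = 42;                  # OUTPUT: STORE called with 42
say $x;                   # OUTPUT: 42
```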

multi sub methodify(%h, :$deeply!) {
    sub interceptor(%h, $parent = Nil){
        use InterceptAllMethods;
        use nqp;

        class Interceptor is Callable {
            has Mu $!the-object;
            has Mu @!stack;

            method ^find_method(Mu \type, Mu:D $name) {
                my method (Mu \SELF: |c) is raw {
                    my @new-stack = @!stack;
                    my $the-object = $!the-object;

                    if $name eq 'STORE' {
                        # workaround for rakudobug#4203
                        $the-object{||@new-stack.head(*-1)}:delete if $the-object{||@new-stack.head(*-1)}:exists;

                        $the-object{||@new-stack} = c;
                        return-rw c
                    } else {
                        @new-stack.push: $name;
                        my \nextlevel = SELF.^new($!the-object, @new-stack, $name);
                        nextlevel
                    }
                }
            }
            method ^introspect(Mu \type, Mu \obj) {
                my method call-it() is raw {
                    $!the-object, @!stack
                }
                obj.&call-it;
            }
            method ^new(Mu \type, $the-object!, @new-stack?, $name?) {
                $name
                    ?? nqp::p6bindattrinvres(
                        nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object),
                        Interceptor, '@!stack', @new-stack)
                    !! nqp::p6bindattrinvres(nqp::create(Interceptor), Interceptor, '$!the-object', $the-object)
                }
        }

        Interceptor.^new(%h)
    }

    interceptor(%h)
}

my %h2;
my $o2 = methodify(%h2, :deeply);
$o2.a.b = 42;
dd %h2;
$o2.a.b.c = <answer>;
dd %h2;
say $o2.a.b.c;
# OUTPUT: Hash %h2 = {:a(${:b(\(42))})}
          Hash %h2 = {:a(${:b(${:c(\("answer"))})})}
          This type cannot unbox to a native string: P6opaque, Interceptor
            in block <unit> at /home/dex/projects/raku/any-chain.raku line 310

Every time we call a method, a new instance of Interceptor is created that stores the name of the previous method. That way we can move along the chain of method calls. Since assignment calls STORE, we can divert the assignment to the Hash we use as the actual data structure. Alas, retrieving values does not work the same way, because Raku does not distinguish between a method call and FETCH. Here the dragon was stronger than me. I still included this half-failed attempt because I had good use for slippy semilists. This requires use v6.e.PREVIEW and allowed me to step on a bug. There are likely more of those. So please use the same, so we can get all the beasties slain before .e is released into the wild.

Having full control over chains of method calls would be nice. Maybe we can do so with RakuAST.

With the things that do work already we can do a few interesting things. Those pesky exceptions are always slowing our progress. We can defuse them with try but that will break a method call chain.

constant no-argument-given = Mu.new;
sub try(Mu $obj is raw, Mu $alternate-value = no-argument-given) {
    interceptor(my method (Mu \SELF: $name, |c) {
        my $o = SELF;
        my \m = $o.^lookup($name) orelse {
            my $bt = Backtrace.new;
            my $idx = $bt.next-interesting-index($bt.next-interesting-index + 1);
            (X::Method::NotFound.new(:method($name), :typename($o.^name)) but role :: { method vault-backtrace { False }}).throw(Backtrace.new($idx + 1));
        }

        try {
            $o = $o."$name"(|c);
        }
 
        $! ~~ Exception
            ?? $alternate-value.WHICH eqv no-argument-given.WHICH
                ?? $o
                !! $alternate-value
            !! $o
    }).($obj)
}

class C {
    has $.greeting;
    method might-throw { die "Not today love!" }
    method greet { say $.greeting }
}

C.new(greeting => ‚Let's make love!‘).&try.might-throw.greet;
# OUTPUT: Let's make love!

The pseudo-method try will defuse any exception and allows us to just carry on with calling methods of C. I have to mark the absence of the optional parameter $alternate-value with a special value, because one might actually turn the exception object into Nil.

I’m quite sure there are many more such little helpers waiting to be discovered. There may be a module in the future, hopefully helping to make Raku a good programming language.

p6steve: raku = Easy | Hard

Published by p6steve on 2021-02-09T09:07:28

Larry Wall, the inventor of perl and raku (formerly known as perl6) coined the phrase “making the easy things easy and the hard things possible”. One way this applies is that developers are publishers and|or consumers of code. For example,

In general, this pattern helps system experts do the low level, tricksy stuff (parsers, VMs, threads, optimisers) and domain experts can then employ a higher level abstraction. Each can focus on their specific domain(s) of interest.

This pattern often entails the use of a powerful, low level language in the server, and a quick and flexible language in the client. You know the scene: Javascript and HTML accessing Java Object Oriented business logic and a SQL database with ACID transactions. And this asymmetric architecture has often made good sense, allowing the server to be fine tuned and type checked while still facilitating rapid application development and delivery via a variety of web / application presentations. Rust in general and the recently announced Rust rewrite of Apache come into this category

But, when these specialisations turn into silos, then barriers may arise that can hamper adaptability, speed of delivery and front-to-back consistency. That’s one reason why bridges have arisen between server and client – Javascript and Node.js being one typical response from the market.

Enter raku; like its predecessor perl, raku is a language that can ‘telescope’. From pithy one liners on the command line to deep class model introspection and mutation. Depending on the needs of the situation and the knowledge of the developer. So, raku combines an approachable on-ramp for less experienced coders and it offers power developers the keys they need to open up and adapt underlying structures to fit specialised requirements. Raku can even inline low level code (C, C++) where the limits of the language are reached. A reboot of the original perl philosophy of “making the easy things easy and the hard things possible”.

Those that follow my blog will know I am the author of Physics::Unit and Physics::Measure modules. I’m now working on Physics::Navigation — inspired by a recent theory course. This is a domain specific class library that consumes Physics::Measure and provides abstractions such as Latitude, Longitude and Bearing. My aim is primarily to have a piece of code that exercises the underlying modules in order to road test the API and to have some fun. One great benefit is that I can use it explore the clarity and power of expression that raku can bring.

It started with an example of calculating the spherical law of cosines formula (aka Haversine distance) thanks to the very informative and helpful movable type website. Code examples are based on the Rosetta Code variations.

As set out mathematically the formula to be calculated is:

a = sin²(Δφ/2) + cos φ1 ⋅ cos φ2 ⋅ sin²(Δλ/2)
d = 2 ⋅ R ⋅ asin( √a )

The example code in Java is:

public class Haversine {
    public static final double R = 6372.8; // In kilometers
    public static double haversine(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        lat1 = Math.toRadians(lat1);
        lat2 = Math.toRadians(lat2);
 
        double a = Math.pow(Math.sin(dLat / 2),2) + Math.pow(Math.sin(dLon / 2),2) * Math.cos(lat1) * Math.cos(lat2);
        double c = 2 * Math.asin(Math.sqrt(a));
        return R * c;
    }
    public static void main(String[] args) {
        System.out.println(haversine(36.12, -86.67, 33.94, -118.40));
    }
}

In contrast, the decomposition in Raku is: the “client”, i.e. a method …

method haversine-dist(Position $p) {
        my \Δ = $.Δ( $p );

        my $a = sin(Δ.φ / 2)² + 
                sin(Δ.λ / 2)² * cos($.φ) * cos($p.φ);

        my $value = 2 * R * asin(sqrt($a));

        Distance.new( :$value, :units<m> )   
}   

… which is aided by the “server”, i.e. a class whose attributes and helper accessor methods partition the problem cleanly and deliver a new level of clarity, code reuse and maintainability.

class Position is export {
	has Latitude  $.lat;
	has Longitude $.long;

	# accessors for radians - φ is latitude, λ is longitude 
	method φ { +$.lat  * π / 180 }
	method λ { +$.long * π / 180 }

        # get delta Δ between two Positions
	method Δ( $p ) {
		Position.new( lat => $p.lat - $.lat, long => $p.long - $.long )
	}
        #...
}

So hopefully, this example illustrates how a thoughtful partitioning of “client-server” code can drive conceptual clarity and productivity. Raku helps by providing:

This is what I love about raku, the language that tries to fade away and to let the resulting code speak for itself and reflect the original problem domain. In some ways the raku formulation is superior even to the original mathematical formula.

I am very excited to learn of recent initiatives as reported in the last edition of Rakudo Weekly News 2021.04 Grant Reporting about the work by Vadim Belman on A New Release Of Cro::RPC::JSON.

This has the potential to functionally partition a program across client and server systems while retaining a consistent set of class definitions, data models and patterns with minimal remote method call overhead. I can even imagine a cadre of data scientists using this technology to create, share and exploit raku code as domain-specific programmers.

And, back to the title of this blog post, “making the easy things easy and the hard things possible”. Here’s a way to say that in raku, a Junction

enum Names <Easy Hard>;           #Map.new((Easy => 0, Hard => 1))
my \raku = Easy | Hard;           #any(Easy, Hard)
raku.^name;                       #Junction

~p6steve

Andrew Shitov: Computing factorials using Raku

Published by Andrew Shitov on 2021-01-31T18:19:33

In this post, I’d like to demonstrate a few ways of computing factorials using the Raku programming language.

1 — reduction

Let me start with the basic and the most effective (not necessarily the most efficient) form of computing the factorial of a given integer number:

say [*] 1..10; # 3628800

In the below examples, we mostly will be dealing with the factorial of 10, so remember the result. But to make the programs more versatile, let us read the number from the command line:

unit sub MAIN($n);

say [*] 1..$n;

To run the program, pass the number:

$ raku 00-cmd.raku 10
3628800

The program uses the reduction meta-operator [ ] with the main operator * in it.

You can also start with 2 (you can even compute 0! and 1! this way).

unit sub MAIN($n);

say [*] 2..$n;
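The reason 0! and 1! still come out right is that reducing an empty list with [*] yields the identity value of the * operator, which is 1. A quick demonstration with fixed ranges:

```raku
say [*] 2..0;   # 1 — empty range, so [*] returns the identity of *, i.e. 0!
say [*] 2..1;   # 1 — same here, i.e. 1!
say [*] 2..5;   # 120
```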

2 — for

The second solution is using a postfix for loop to multiply the numbers in the range:

unit sub MAIN($n);

my $f = 1;
$f *= $_ for 2..$n;

say $f;

This solution is not that expressive but still demonstrates quite clear code.

3 — map

You can also use map that is applied to a range:

unit sub MAIN($n);

my $f = 1;
(2..$n).map: $f *= *;

say $f;

Refer to my article All the stars of Perl 6, or * ** * to learn more about how to read *= *.

4 — recursion

Let’s implement a recursive solution.

unit sub MAIN($n);

sub factorial($n) {
    if $n < 2 {
        return 1;
    }
    else {
        return $n * factorial($n - 1);
    }
}

say factorial($n);

There are two branches, one of which terminates recursion.

5 — better recursion

The previous program can be rewritten to produce code with less punctuation:

unit sub MAIN($n);

sub factorial($n) {
    return 1 if $n < 2;
    return $n * factorial($n - 1);
}

say factorial($n);

Here, the first return is managed by a postfix if, and the second return can only be reached if the condition in if is false. So, neither an additional Boolean test nor else is needed.

6 — big numbers

What if you need to compute a factorial of a relatively big number? No worries, Raku will just do it:

say [*] 1..500;

The speed is more than acceptable for any practical application:

raku 06-long-factorial.raku  0.14s user 0.02s system 124% cpu 0.127 total

7 — small numbers

Let’s try the opposite and compute a factorial that can fit into a native integer:

unit sub MAIN($n);

my int $f = 1;
$f *= $_ for 2..$n;

say $f;

I am using a for loop here, but notice that the type of $f is a native integer (thus, 8 bytes on a typical 64-bit system). This program works with the numbers up to 20:

$ raku 07-int-factorial.raku 20
2432902008176640000
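Why 20? Because 20! is the last factorial that fits into a signed 64-bit integer, which is what a native int holds on a 64-bit build. A quick check:

```raku
# 2**63 is the first value a signed 64-bit integer cannot represent
say 2432902008176640000  < 2**63;   # True  — 20! fits
say 51090942171709440000 < 2**63;   # False — 21! does not
```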

8 — sequence

The fun fact is that you can add a dot to the first program 🙂

unit sub MAIN($n);

say [*] 1 ... $n;

Now, 1 ... $n is a sequence. You can start it with 2 if you are not planning to compute the factorials of 0 and 1.

9 — reversed sequence

Unlike the solution with a range, it is possible to swap the ends of the sequence:

unit sub MAIN($n);

say [*] $n ... 1;

10 — sequence with definition

Nothing stops us from defining the elements of the sequence with a code block. The next program shows how you do it:

unit sub MAIN($n);

my @f = 1, * * ++$ ... *;
say @f[$n];

This time, the program generates a sequence of factorials from 1! to $n!, and to print the only one we need, we take the value from the array as @f[$n]. Notice that the sequence itself is lazy and its right end is undefined, so you can’t use @f[*-1], for example.

The rule here is * * ++$ (multiply the last computed value by the incremented index); it is using the built-in state variable $.
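To see the rule in action, it can help to print the first few elements of the lazy sequence:

```raku
my @f = 1, * * ++$ ... *;   # the anonymous state variable $ starts at 0
say @f[^6];                 # (1 1 2 6 24 120) — 0! through 5!
```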

11 — multi functions

The idea of the solutions 4 and 5 with two branches can be further transformed to using multi-functions:

unit sub MAIN($n);

multi sub factorial(1)  { 1 }
multi sub factorial($n) { $n * factorial($n - 1) }

say factorial($n);

For the numbers above 1, Raku calls the second variant of the function. When the number comes down to 1, recursion stops, because the first variant is called. Notice how easily you can create a variant of a function that only reacts to the given value.

12 — where

The previous program loops infinitely if you try to set $n to 0. One of the simplest solutions is to add a where clause to catch that case too.

unit sub MAIN($n);

multi sub factorial($n where $n < 2)  { 1 }
multi sub factorial($n) { $n * factorial($n - 1) }

say factorial($n);

13 — operator

Here’s another classical Raku solution: modifying its grammar to allow mathematical notation $n!.

unit sub MAIN($n);

sub postfix:<!>($n) {
    [*] 1..$n
}

say $n!;

14 — methodop

A rarely seen Raku feature called methodop (method operator) allows you to call a function as if it was a method:

unit sub MAIN($n);

sub factorial($n) { 
    [*] 1..$n
}

say $n.&factorial;

15 — cached

Recursive solutions are perfect subjects for result caching. The following program demonstrates this approach.

unit sub MAIN($n);

use experimental :cached;

sub f($n) is cached {
    say "Called f($n)";
    return 1 if $n < 2;
    return $n * f($n - 1);
}

say f($n div 2);
say f($n);

This program first computes a factorial of the half of the input number, and then of the number itself. The program logs all the calls of the function. You can clearly see that, say, the factorial of 10 is using the results that were already computed for the factorial of 5:

$ raku 15-cached-factorial.raku 10
Called f(5)
Called f(4)
Called f(3)
Called f(2)
Called f(1)
120
Called f(10)
Called f(9)
Called f(8)
Called f(7)
Called f(6)
3628800

Note that the feature is experimental.

16 — triangular reduction

The reduction operator that we already used has a special variant [\ ] that allows you to keep all the intermediate results. This is somewhat similar to using a sequence in the example 10.

unit sub MAIN($n);

my @f = [\*] 1..$n;

say @f[$n - 1];

17 — division of factorials

Now a few programs that go beyond the factorials themselves. The first program computes the value of the expression a! / b!, where both a and b are integer numbers, and a is not less than b.

The idea is to optimise the solution to skip the overlapping parts of the multiplication sequences. For example, 10! / 5! is 6 * 7 * 8 * 9 * 10.

To have more fun, let us modify Raku’s grammar so that it really parses the above expression.

unit sub MAIN($a, $b where $a >= $b);

class F {
    has $.n;    
}

sub postfix:<!>(Int $n) {    
    F.new(n => $n)
}

sub infix:</>(F $a, F $b) { 
    [*] $b.n ^.. $a.n
}

say $a! / $b!;

We already have seen the postfix:<!> operator. To catch division, another operator is defined, but to prevent catching the division of data of other types, a proxy class F is introduced.

To keep proper processing of expressions such as 4 / 5, define another / operator that catches things which are not F. Don’t forget to add multi to both options. The callsame built-in routine dispatches control to the built-in operator definitions.

. . .

multi sub infix:</>(F $a, F $b) { 
    [*] $b.n ^.. $a.n
}

multi sub infix:</>($a, $b) {
    callsame
}

say $a! / $b!;
say 4 / 5;

18 — optimisation

Let’s try to reduce the number of multiplications. Take a factorial of 10:

10 * 9 * 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1

Now, take one number from each end, multiply them, and repeat the procedure:

10 * 1 = 10
 9 * 2 = 18
 8 * 3 = 24
 7 * 4 = 28
 6 * 5 = 30

You can see that every such result is bigger than the previous one by 8, 6, 4, and 2. In other words, the difference reduces by 2 on each iteration, starting from 10, which is the input number.

The whole program that implements this algorithm is shown below:

unit sub MAIN(
    $n is copy where $n %% 2 #= Even numbers only
);

my $f = $n;

my $d = $n - 2;
my $m = $n + $d;

while $d > 0 {
    $f *= $m;
    $d -= 2;
    $m += $d;
}

say $f;

It only works for even input numbers, so it contains a restriction reflected in the where clause of the MAIN function. As homework, modify the program to accept odd numbers too.
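One possible solution to that homework (my sketch, not the author’s code): reduce an odd number to the even case first, since n! = n · (n−1)! and n−1 is then even.

```raku
sub pair-factorial(Int $n) {
    return 1 if $n < 2;
    return $n * pair-factorial($n - 1) if $n % 2;   # odd: fall back to the even case

    # the pairing trick from above, unchanged
    my $f = $n;
    my $d = $n - 2;
    my $m = $n + $d;
    while $d > 0 {
        $f *= $m;
        $d -= 2;
        $m += $d;
    }
    $f
}

say pair-factorial(9);   # 362880
```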

19 — integral

Before wrapping up, let’s look at a couple of exotic methods which, however, can be used to compute factorials of non-integer numbers (or, to be stricter, to compute what can be called an extended definition of it).

The proper way would be to use the Gamma function, but let me illustrate the method with a simpler formula, the integral the code below implements:

n! = ∫₀¹ (−ln x)ⁿ dx

An integral is a sum by definition, so let’s make a straightforward loop:

unit sub MAIN($n);

my num $f = 0E0;
my num $dx = 1E-6;
loop (my $x = $dx; $x <= 1; $x += $dx) {
    $f += (-log($x)) ** $n;
}

say $f * $dx;

With the given step of 1E-6, the result is not that exact:

$ raku 19-integral-factorial.raku 10
3086830.6595557937

But you can compute a ‘factorial’ of a floating-point number. For example, 5! is 120 and 6! is 720, but what is 5.5!?

$ raku 19-integral-factorial.raku 5.5
285.948286477563

20 — another formula

And finally, Stirling’s formula to the rescue: n! ≈ √(2πn) · (n / e)ⁿ. The bigger the n, the more accurate the result.

The implementation can be as simple as this:

unit sub MAIN($n);

# τ = 2 * π
say (τ * $n).sqrt * ($n / e) ** $n;

But you can make it a bit more outstanding if you have a fixed $n:

say sqrt(τ * 10) * (10 / e)¹⁰;

* * *

And that’s it for now. You can find the source code of all the programs shown here in the GitHub repository github.com/ash/factorial.

vrurg: Re-release Of Cro::RPC::JSON

Published by Vadim Belman on 2021-01-28T00:00:00

For a couple of reasons I had to revamp the module and change it in a non-backward-compatible way. To avoid bumping api again, and because versions 0.1.0 and 0.1.1 contained a couple of serious enough problems, I considered it more reasonable to pull these versions from CPAN. Not the best solution, of course, nor one I was fully OK with.

Today I’m releasing version 0.1.2. Aside from the version bump, this is also the first version of the module and, actually, my first ever module released into the zef ecosystem.

The release also got a major new feature. The module now supports JSON-RPC method call authorization. The documentation explains it.

vrurg: A New Release Of Cro::RPC::JSON

Published by Vadim Belman on 2021-01-22T00:00:00

UPDATE I had to pull out versions 0.1.0 and 0.1.1 from CPAN. See the followup post.

I don’t usually announce regular releases of my modules. But not this time. I start this new year with the new v0.1 branch of Cro::RPC::JSON. Version 0.1.1 is currently available on CPAN (will likely be replaced with fez as soon as it is ready). The release is a result of such extensive changes in the module that I had to bump its :api version to 2.

Here is what it’s all about.

Why Cro::RPC::JSON

Before I tell about the changes, let me briefly introduce the module itself.

The primary purpose of Cro::RPC::JSON is to provide a simple way to use the same API class for both server-side Raku code and for JSON-RPC calls. For example, if we have a class for accessing an inventory database of some kind then it should be easy to use the same class to serve our front-end JavaScript code:

class Inventory {
    ...
    method find-item(Str:D $name) is json-rpc { ... }
    ...
}

That’s all. The only limitation the module currently imposes on the methods is that they accept simple arguments and return JSONifiable values, as supported by JSON::Fast. Marshalling/unmarshalling of parameters/return values is considered, but I’m not certain yet as to how exactly to implement it.

Now, all we need to serve JSON-RPC calls is to add this kind of entry into Cro routes:

# If Inventory is not thread-safe, this line has to be placed inside the
# 'post' block.
my $inventory-actor = Inventory.new;
route {
    ...
    post -> 'api' {
        json-rpc $inventory-actor;
    }
}

And that’s all.

Cro::RPC::JSON also supports code objects as handlers of the RPC requests. But let me omit this part and send you directly to the documentation. Let’s get to the point instead!

WebSockets Support

A long-planned feature I never had enough time to get implemented. But when life itself demands it in the form of an in-house project, where WebSockets are a natural fit, who am I to disobey the command?

The support comes in the form of implementing both JSON-RPC and asynchronous notifications by treating the socket as a bi-directional stream of JSON objects. JSON-RPC ones are recognized either by the presence of a jsonrpc key in an object, or by treating a JSON array as a JSON-RPC batch. Because the RPC traffic is prioritized over free-form notifications, the latter can only be represented by JSON objects (and without a jsonrpc key in them!). No other limitations are implied. I don’t consider the constraint a problem because the primary purpose of non-RPC objects is to support push notifications. For any other kind of traffic it is always possible to open a new RPC-free socket.

The use of our actor/API class with WebSockets is as simple as:

route {
    ...
    get -> 'api' {
        json-rpc :web-socket, $inventory-actor;
    }
}

Note that there is no need to change our Inventory class aside from the already added is json-rpc trait. The same code will work for both HTTP/POST and WebSockets transports! It actually makes it possible to provide both interfaces on the server side by simply having two route entries – one for each case. Which one to use would depend on what kind of optimization the developer wants to achieve. I haven’t had time to benchmark, but common sense tells me that whereas WebSockets provide lower latency and are great for single requests affecting your application’s response time on user actions, HTTP/POST would serve better for big parallelizable requests. For example, we can fill in a big table by requesting each individual line of it from the server in an asynchronous manner over HTTP/POST, allowing the server to process all requests in parallel.

Modes Of Operation

The initial v0.0 branch of the module was strictly about the HTTP/POST transport and thus only supported the simple “invoke code - get result” mode. The only alternative provided to a user was the choice between using a class or a code object (see, for example, this route entry from the Cro::RPC::JSON test suite).

Introduction of asynchronous WebSockets consequently introduced the demand for additional asynchronicity. If we’re about to support push notifications, then we have to somehow react to server-side events too, right? Therefore the module now provides three different modes of operation: synchronous, asynchronous, and hybrid. Again, I’d better avoid citing the documentation here. Just a single example for our Inventory class:

method subscribe(Str:D $notification) is json-rpc("rpc.on") { ... }
method unsubscribe(Str:D $notification) is json-rpc("rpc.off") { ... }
method on-inventory-update is json-rpc(:async) {
    supply {
        whenever $!database.updated -> $update-event {
            emit %(
                notification => 'ItemUpdate',
                params => %(
                    id => $update-event.item.id,
                    status => $update-event.status,
                ),
            ) if %!subscriptions<ItemUpdate>;
        }
    }
}

Now, all our client-side JavaScript code needs is to call rpc.on JSON-RPC method with “ItemUpdate” as the argument and start listening for incoming events. Any item update in the inventory will now be tracked on the client side automatically.

Roles And Classes

Don’t wanna focus on this, mostly technical fixes like support for parameterized roles, fixed inheritance and role consumption.

A note to myself: don’t you want to document all this?

Error Handling

This was perhaps the biggest nightmare, as it always is with asynchronous code. But, hopefully, I finally got it done right. For synchronous code and class method calls the module should now do all that is needed to properly handle any exception leaked from the user code. For JSON-RPC method calls this would mean a correct response produced with the error key set.

For asynchronous code things are much harder to keep control of. Whereas HTTP/POST allows Cro::RPC::JSON to make some guesses and provide some additional error processing, for WebSockets any unhandled exception in any async code means socket termination. Being rather new to this technology, I spent a lot of time trying to figure out how to properly respond to a request until I realized that actually there is no solution to this problem. The reason is as simple as it only can be: if asynchronous user code throws, then the Supply it provided is actually terminated. Since the supply is an integral part of the whole JSON-RPC/WebSocket processing pipeline, its termination means the pipeline is disrupted.

LAST

It’s now time to dive into the muddy waters of TypeScript/JavaScript. I have already tested Cro::RPC::JSON with Vue framework and rpc-websockets module and it passed the basic tests of calling methods and processing push notifications. Will see where this all eventually takes me. After all, this is gonna be my first production project in Raku.

Andrew Shitov: The course of Raku

Published by Andrew Shitov on 2021-01-13T08:44:00

I am happy to report that the first part of the Raku course is completed and published. The course is available at course.raku.org.

The grant was approved a year and a half ago, right before the PerlCon conference in Rīga. I was the organiser of the event, so I had to postpone the course due to high load. During the conference, it was proposed to rename Perl 6, which, together with other stuff, made me wonder whether the course was needed at all.

After months, the name was settled, the distinction between Perl and Raku became clearer, and, more importantly, external resources and services, e.g., Rosettacode and glot.io, started using the new name. So, now I think it is still a good idea to create the course that I dreamed about a couple of years ago. I started the main work in the middle of November 2020, and by the beginning of January 2021, I had the first part ready.

The current plan includes five parts:

  1. Raku essentials
  2. Advanced Raku subjects
  3. Object-oriented programming in Raku
  4. Regexes and grammars
  5. Functional, concurrent, and reactive programming

It differs a bit from the original plan published in the grant proposal. While the material stays the same, I decided to split it differently. Initially, I was going to go through all the topics one after another. Now, the first sections reveal the basics of some topics, and we will return to the same topics on the next level in the second part.

For example, in the first part, I only talk about the basic data types: Int, Rat, Num, Str, Range, Array, List, and Hash, and their basic usage. The rest, including other types (e.g., Date or DateTime) and methods such as @array.rotate or %hash.kv, is delayed until the second part.

Conversely, functions were a subject of the second part initially, but they are now discussed in the first part. So, we now have Part 1 “Raku essentials” and Part 2 “Advanced Raku topics”. This shuffling allowed me to create a linear flow in such a way that the reader can start writing real programs already after they finish the first part of the course.

I must say that it is quite a tricky task to organise the material without backward links. In the ideal course, any topic may only be based on the previously explained information. A couple of the most challenging cases were ranges and typed variables. They both cause a few chicken-and-egg loops.

During the work on the first part, I also prepared a ‘framework’ that generates the navigation through the site and helps with quiz automation. It is hosted as GitHub Pages and uses Jekyll and Liquid for generating static pages, and a couple of Raku programs to automate the process of adding new exercises and highlighting code snippets. Syntax highlighting is done with Pygments.

Returning to the course itself, it includes pages of a few different types:

The quizzes were not part of the grant proposal, but I think they help to make a better user experience. All the quizzes have answers and comments. All the exercises are solved and published with comments to explain the solution, or even to highlight some theoretical aspects.

The first part covers 91 topics and includes 73 quizzes and 65 exercises (with 70 solutions :-). There are about 330 pages in total. The sources are kept in a GitHub repository github.com/ash/raku-course, so people can send pull requests, etc.

At this point, the first part is fully ready. I may slightly update it if the following parts require additional information about the topics covered in Part 1.

This text is a grant report, and it is also (a bit modified) published at https://news.perlfoundation.org/post/rakucourse1 on 13 January 2021.

p6steve: Raku Performance and Physics::Unit

Published by p6steve on 2021-01-03T22:32:22

I have been able to spend some time on the Physics::Unit module over the holidays and to expunge some of the frustrations that have crept in regarding the compile times of raku.

The basic problem I have been wrestling with is the desire to express physical SI units using the raku custom postfix operator mechanism without having to wait for 10 mins for raku to compile my module.

This is the story of how judicious design, lazy execution, trial & error and the raku power tools got me from 10 mins+ to under 13 secs!

Let’s start by looking at the sunlit uplands. Imagine a raku which provides a simple and intuitive tool for scientists and educators to perform calculations that automatically figure out what the physical units are doing.

Something like this:

use Physics::Constants;
use Physics::Measure :ALL;

$Physics::Measure::round-to = 0.01;

my \λ = 2.5nm; 
my \ν = c / λ;  
my \Ep = ℎ * ν;  

say "Wavelength of photon (λ) is " ~λ;              #2.5 nm
say "Frequency of photon (ν) is " ~ν.norm;          #119.92 petahertz 
say "Energy of photon (Ep) is " ~Ep.norm;           #79.46 attojoule

Now, before you ask, yes – this is real code that works and compiles in a few seconds. It uses the latest version Physics::Measure module which in turn uses the Physics::Units module. Let me point out a few cool things about how raku’s unique combination of features is helping out:

So – how can it be that hard? Well the devil is in the little word all [the SI unit postfix operators]. Consider this table:

So we have 27 units and 20 prefixes – that’s, err, 540 combinations. And you want me to import all of these into my namespace. And you want me to have a library of 540 Physics::Unit types that get loaded when I use the postfix. Have you thought this through!!??
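The arithmetic, and the combinatorial explosion itself, is easy to reproduce with the X cross metaoperator (a toy sketch using made-up three-element lists, not the module’s real tables):

```raku
say 27 * 20;              # 540

my @prefixes = <k M G>;   # a tiny, illustrative subset of the 20 SI prefixes
my @units    = <m s g>;   # likewise for the 27 units
say @prefixes X~ @units;  # (km ks kg Mm Ms Mg Gm Gs Gg)
```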

So – by way of sharing the pain of this part of my raku Physics::Journey – here are the lessons I have learned to optimise my module code:

Attempt 1 – Ignore It

My first instinct was to punt on the issue. The original perl5 Physics::Unit module allows coders to specify a unit type via a string expression – something like this:

my $u2 = GetUnit( 'kg m^2 / s^2' );

Anyway I knew I would need unit expressions to cope with textual variants such as ‘miles per hour’ or ‘mph’, or ‘m/s’, ‘ms^-1’, ‘m.s-1’ (the SI derived unit representation) or ‘m⋅s⁻¹’ (the SI recommended string representation, with superscript powers). So a new unit expression parser was built into Physics::Unit from the start with raku Grammars. However, it became apparent that saying:

my $l = Length.new( value => 42, units => 'yards' );

Is a pretty long-winded way to enter each measurement. Still, this was a cool way to apply (and for me to learn) raku Grammars and Actions which has resulted in a flexible, human-friendly unit expression slang as a built-in piece of the Physics::Unit toolkit.

Attempt 2 – Working but Dog Slow

So far, my Physics::Unit module would happily take a unit string, parse it with the UnitGrammar and create a suitable instance of a Unit object. Something like this:

Unit.new( factor => 0.00016631, offset => 0, 
    defn => 'furlong / fortnight', 
    type => Speed, dims => [1,0,-1,0,0,0,0,0], 
    dmix => ("fortnight"=>-1,"furlong"=>1).MixHash, names => ['ff'] );

This user-defined object is generated by iterating over its roots (e.g.) 1 fortnight => 2 weeks => 14 days => 336 hours => 20,160 mins => 1,209,600 secs (thus the factor attribute). More than 270 built-in unit and prefix definitions – covering SI, US (feet, inches), Imperial (pints, gallons) and so on. And the .in() method is used for conversions. [There does not seem much point in a Unit library unless it can support common usage such as mph and conversion between this and the formal SI units.]
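The factor attribute above can be sanity-checked by hand (a sketch; 1 furlong = 201.168 m exactly):

```raku
my $furlong   = 201.168;              # metres
my $fortnight = 14 * 24 * 60 * 60;    # seconds: 1_209_600
say $furlong / $fortnight;            # ≈ 0.00016631
```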

But, now I come to implement my postfix operators – then I need to pass 540 definitions to the Grammar on first compilation and it needs to build 540 object instances. Welcome to 10 mins+ compile times.

Before I go too far with this critique – I would like to note a couple of very important caveats:

  1. “So finally, we have an actual Perl 6 that can compete with all those languages too, at least in terms of features. One of the ways it does not yet compete with Perl 5 is in the area of performance, but the first half of this year we’ve already got it running nearly twice as fast as it was at Christmas. We’ve still got a lot of headroom for optimization. But Perl 6 has its eye on surpassing all those other languages eventually. Perl has always been about raising the bar, then raising it again, and again. ” Larry Wall on Slashdot in 2016 … and optimisations and enhancements are coming all the time.
  2. Raku recompilation is a very big speed multiplier – even with 30 min compile times, the precompiled binary loaded and ran in about 12 seconds.
  3. Personally I echo the view that raku is a very high level language and that the ensuing programmer productivity benefits in precision and expression outweigh a few seconds of compile time. I am confident that the hardware will continue to improve and will soon eliminate any noticeable delay – for example the recent Apple M1 launch.

Attempt 3 – Stock Unit Blind Alley

Sooo – the third attempt to combine the desired features and reasonable speed was to pre-generate the 540 units as literals – “Stock” units. So the code could be run in “dump” mode to generate the unit literals using the Grammar and store to a text file, then paste them back into the module source so that in the released version they are just read in via a “fast start” mode.

By reusing the same Grammar for pre-generation and on the fly generation of user-defined Units, this eliminated any potential compatibility issues. I chose not to compromise with any of the MONKEY-SEE-NO-EVAL for module security and code integrity reasons.

Performance improvements were dramatic. By bypassing the Grammar step, first compile times came down to ~340s and precompile start times to under 3s. Nevertheless, I was still unhappy to release a module with such a slow first compile time and searched for a better design.

Attempt 4 – Lazy Instantiation

On the fourth pass, I decided to re-factor the module with lazy Unit instantiation. Thus the unit definitions are initialised as hash data maps, but only a handful of objects are actually created (the base units and prefixes).

Then, when a new object is called for, it is generated “lazily” on demand. Even in an extreme case such as the ‘furlong / fortnight’ example, only O(10) objects are created.

By eliminating the Stock units, this reduced the module source by 2000+ lines (540 x 4 lines per object literal). Performance improved substantially again – this time about 60s first compile and 2.6s precomp start times.

However, the Physics::Measure code still had to embody the postfix operators and to export them to the user program. Thus 540 lines to individually compile. Each postfix declaration like this:

sub postfix:<m> ( Real:D $x ) is export { do-postfix( $x, 'm' ) }

Attempt 5 – UNIT::EXPORT package

Even better, I could learn from the excellent raku documentation again – this time to discover UNIT::EXPORT, which gave me a good start towards programmatically exporting all 540 postfixes in just 6 lines of code. Goodbye boilerplate!

my package EXPORT::ALL {
  for %affix-by-name.keys -> $u {
    OUR::{'&postfix:<' ~ $u ~ '>'} := 
                    sub (Real:D $x) { do-postfix($x,"$u") };
  }   
}

This brought an additional performance boost – the final numbers are below…

Attempt 6 – Selective Import

Finally, a word on selective import. Prior to Attempt 5, I experimented with labelling the less common units (:DEFAULT, :electrical, :mechanical, :universal and :ALL if you are interested). But even reducing :DEFAULT to only 25% of the total did not reduce the first compile time measurably. I interpret this as the compiler needing to process all the lines even if the import labels are not specified by the user program.

But with the package approach, Physics::Measure :ALL will export all of the SI units. Just drop the :ALL label if you want even more speed and plan to go without the postfix operators.
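As a general sketch of how such labels work (module, sub and unit names invented for illustration, not the actual Physics::Measure source), export tags are attached to each sub with `is export`, and the user then picks a tag on the `use` line:

```raku
unit module Physics::Measure::Sketch;

# Tagged exports: a plain `use Physics::Measure::Sketch;` brings in
# only the :DEFAULT subs; rarer groups come in on request, e.g.
#   use Physics::Measure::Sketch :electrical;
# and :ALL pulls in everything.
sub postfix:<m> (Real:D $x) is export(:DEFAULT, :ALL)    { "$x m" }
sub postfix:<A> (Real:D $x) is export(:electrical, :ALL) { "$x A" }
```

As noted above, tagging alone does not help first-compile time, since every declaration is still compiled; it only narrows what is imported.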

Final Outcome

So, the latest speed measurements (on my low spec 1.2GHz/8GB laptop) are:

# use Physics::Measure;      ...10s first-, 1.2s pre- compiled
# use Physics::Measure :ALL; ...13s first-, 2.8s pre- compiled

YMMV!

~p6steve (pronounced p-sics)

Jo Christian Oterhals: What did we learn from an astronomer’s hacker hunt in the 80's? Apparently, not too much

Published by Jo Christian Oterhals on 2020-12-29T19:55:31

Computer security has seen its share of mind-boggling news lately. None more mind-boggling than the news about how alleged Russian hackers installed a backdoor into the IT monitoring product SolarWinds Orion. Through this they got entrance into the computer systems of several US agencies and departments — ironically even into the systems of a cyber security company (FireEye) and Microsoft itself. The news made me think of my own history with computer security, and down memory lane I went.

One particular day in late July or early August 1989 my parents, my sister and I were driving home from a short summer vacation. At a short stop in a largish city, I had found a newsstand carrying foreign magazines. There I’d bought a copy of PC/Computing’s September issue (to this day I don’t understand why American magazines are on sale a couple of months before the cover date) so that I had something to make time in the backseat pass faster.

Among articles about the relatively new MS-DOS replacement OS/2 (an OS co-developed with IBM that Microsoft would come to orphan the minute they launched Windows 3.0 and understood the magnitude of the success they had on their hands) and networking solutions from Novell (which Windows would kill as well, albeit more indirectly), the magazine brought an excerpt of the book “The Cuckoo’s Egg” by a guy named Clifford Stoll. Although I had bought the magazine for the technical information such as the stuff mentioned above, this excerpt stood out. It was a mesmerising story about how an astronomer-turned-IT-guy stumbled over a hacker, and how he, aided by interest but virtually no support from the FBI, CIA and NSA, almost single handedly traced the hacker’s origins back to a sinister government sponsored organisation in the then communist East Germany.

This is the exact moment I discovered that my passion — computers — could form the basis of a great story.

Coastal Norway where I grew up is probably as far from the author’s native San Francisco as anything; at least our book stores were. So it wasn’t until the advent of Amazon.com some years later that I was able to order a copy of the book. Luckily, the passing years had not diminished the story. Granted, the Internet described by the author Clifford Stoll was a little more clunky than the slightly more modern dial-up internet I ordered the book on. But subtract the World Wide Web from the equation and the difference between his late eighties internet and my mid-nineties modem version wasn’t all that big. My Internet was as much a monochrome world of telnet and text-only servers as it was a colourful web. Email, for instance, was something I managed by telnetting into a HP Unix server and using the command line utility pine to read and send messages.

What struck me with the story was that the hacker’s success very often was enabled by sloppy system administration; one could arguably say that naïve or ignorant assumptions by sysadmins all across the US made the hack possible. Why sloppy administration and ignorant assumptions? Well, some of the reason was that the Internet was largely run by academia back then. Academia was (and is) a culture of open research and sharing of ideas and information. As such it’s not strange that sysadmins of that time assumed that users of the computer systems had good intentions too.

But no one had considered that the combination of several (open) sources of information and documents stored on these servers, could end in very comprehensive insight into, say, the Space Shuttle program or military nuclear research. Actually, the main downside to unauthorised usage had to do with cost: processing power was expensive at the time and far from a commodity. Users were billed for their computer usage. So billing was actually the reason why Stoll started his hacker hunt. There were a few cents’ worth of computer time that couldn’t be accounted for. Finding out whether this was caused by a bug or something else, was the primary goal of Mr. Stoll’s hunt. What the hacker had spent this time on was — at first — a secondary issue at best.

With that in mind it’s maybe not so strange that one of the most common errors made was not changing default passwords on multi-user computers connected to the internet. One of the systems having a default password was the now virtually extinct VAX/VMS operating system for Digital’s microcomputer series VAX. This was one of the things Mr. Stoll found out by logging each and every interaction the hacker, using Stoll’s compromised system as a gateway, had with other systems (the description of how he logged all this by wiring printers up to physical ports on the microcomputer, rewiring the whole thing every time the hacker logged on through another port, is by itself worth reading the book for). Using the backdoor, the hacker not only gained access to that computer — they got root privileges as well.

In the 30+ years that have passed since I read the book I’ve convinced myself of two things: 1) we’ve learned not to use default passwords anymore, and 2) that VMS systems exhibiting this kind of backdoor are long gone.

Well, I believed these things until a few weeks ago. That’s when I stumbled onto a reddit post — now deleted, but there is still a cached version available on the Wayback Machine. Here the redditor explained how he’d identified 33 remaining VAX/VMS systems still on the Internet:

About a year ago I read the book “A Cuckoo’s Egg”, written in 1989. It included quite a bit of information pertaining to older systems such as the VAX/VMS. I went to Censys (when their big data was still accessible easily and for free) and downloaded a set of the telnet (port 23) data. A quick grep later and I had isolated all the VAX/VMS targets on the net. Low and behold, of the 33 targets (I know, really scraping the bottom of the barrel here) more than half of them were still susceptible to default password attacks literally listed in a book from almost 3 decades ago. After creating super user accounts I contacted these places to let them know that that they were sporting internet facing machines using default logins from 1989. All but one locked me out. This is 31 years later… The future will be a mess, kids

I applaud the redditor that discovered this. Because isn’t what he found a testament to something breathtakingly incompetent and impressive at the same time? Impressive in the sense that someone’s been able to keep these ancient systems alive on the internet for decades; incompetent because the sysadmins have ignored patching the most well documented security flaw of those systems for well over a quarter century?

So maybe this starts to answer the question posed in the title: did we learn anything from this?

Yes, of course we did. If we look past the VMS enthusiasts out there, computer security is very different now than it was back then. Unencrypted communication is hardly used anywhere anymore. Security is provided by multilayered hardware and software solutions. In addition, not only are password policies widely enforced on users, but two-factor and other extra layers of authentication are used as well.

But the answer is also No. While organisations such as my workplace — which is not in the business of having secrets — have implemented lots of the newest security measures, this autumn we learned that the Norwegian parliament — which is in the business of having secrets — hadn’t. It had weak password policies and no two-factor authentication for its email system.

Consequently they recently became an easy target for Russian hackers. I obviously don’t know the reasoning behind implementing such weak security. But my guess is that the IT department assessed the digital competence of the parliament members and concluded that it was too low for them to handle strong passwords and manage two-factor authentication.

And this is perhaps the point where the security of yesteryear and security today differs the most: As we’re closing in on 2021, weak security is a conscious choice; but it is the same as leaving the door wide open, and any good sysadmin knows it.

The ignorance exhibited in the case of the Norwegian parliament borders, in my opinion, on criminal ignorance — although I guess no one will ever have to take the consequences. What it does prove, however, is that while systems can be as good as anything, people are still the weakest link in any such system.

In sum I think my answer to the initial question is an uneasy Maybe. We still have some way to go before what Cliff Stoll taught us 32 years ago has become second nature.

vrurg: A Good Word About JavaScript

Published by Vadim Belman on 2020-12-29T00:00:00

What I do like about JavaScript is how elegant many of their solutions to their own f*ckups are!

vrurg: On Coercion Method Return Value

Published by Vadim Belman on 2020-12-26T00:00:00

As Wenzel P.P. Peppmeyer continues with his great blog posts about Raku, he touches some very interesting subjects. His latest post is about implementing the DWIM principle in a module to let a user care less about boilerplate code. This alone wouldn’t have made me write this post, but Wenzel raises a matter which I expected to be a source of confusion. And apparently I wasn’t mistaken about it.

Here is the quote from the post:

The new coercion protocol is very useful but got one flaw. It forces me to return the type that contains the COERCE-method. In a role that doesn’t make much sense and it forces me to juggle with IO::Handle.

Basically, the claim comes down to the following snippet:

role Foo {
    multi method COERCE(Str:D $s) {
        $s.IO.open: :r
    }
}
sub foo(Foo() $v) {
    say $v.WHICH;
}
foo($?FILE);

Trying to run it one will get:

Impossible coercion from 'Str' into 'Foo': method COERCE returned an instance of IO::Handle

Let’s see why the error is legitimate here.

The short answer would be: coercion, though relaxed in a way, is first and foremost a type constraint.

The longer answer is: the user expects Foo in $v in sub foo. I propose to do a little thought experiment which involves adding a method to the role Foo. For example, we want to pad text in the file with spaces. For this we implement method shift-right(Int:D $columns) {...} in role Foo. Then we use the method in our sub foo:

sub foo(Foo() $handle) {
    ...
    $handle.shift-right(4);
    ...
}

Do I need to elaborate on what’s gonna happen when $handle is not Foo?

Here is the version of the role as I would do it:

subset Pathish of Any:D where Str | IO::Handle;

role Filish[*%mode] is IO::Handle {
    multi method COERCE(IO:D(Pathish) $file) {
        self.new(:path($file)).open: |%mode
    }
}

sub prep-file( Filish[:r, :!bin]() $h, 
               Str:D $pfx ) 
{
    $h.lines.map($pfx.fmt('%-10s: ') ~ *)».say;
}

prep-file($?FILE, "Str");
prep-file($?FILE.IO, "IO");
prep-file($?FILE.IO.open(:a), "IO::Handle");

Note the use of coercion to implement coercion. The idea is to take anything that could be turned into an IO instance.

Also, I forcibly re-open any IO::Handle because the source could have different open modes from what I expect to be the result of the coercion. In my example I’m intentionally passing a handle opened in append mode into a sub expecting a read handle.

I’d like to finish with a note that here we have a good example of Raku allowing DWIM code semantics without breaking its predictability.

vrurg: Runtime vs. Compilation, Or Reply \#2

Published by Vadim Belman on 2020-12-26T00:00:00

The friendly battle continues with the next post from Wenzel where he considers different cases where strict typechecking doesn’t work. The primary point he makes is that many decisions Raku makes are run-time decisions and this is where static typechecking doesn’t work well. This is true. But this doesn’t change the meaning of my previous post.

First of all, let’s draw a clear distinction between compile-time decisions and run-time decisions. Basically, when we write something like:

sub foo(Int:D $x) {...}

semantically it means:

sub foo($x) { 
    die "oopsie!" 
        unless $x ~~ Int && $x.defined; 
    ... 
}

There is nothing magical in it. The difference between the two snippets is mostly about performance, as the compiler can do many more optimizations related to the static typing of the first case than it can with the second variant. This is pretty much clear. What’s less clear is that if we declare a parameterized role, we often end up with run-time code produced by the compiler.

role R1[::T] {
    method foo(T $x) { ... }
    method bar(Int:D $x) { ... }
}

Here foo will do a lot of extra work at runtime because the compiler doesn’t know what type $x will have. So, when it comes to:

class C1 does R1[Str(Int:D)] { ... }

Then something like C1.bar(pi) will throw after a simple pi ~~ Int:D check. But the C1.foo(pi) case will result in the signature binding code doing extra steps to resolve T, and then a few more operations before actually throwing a bad-parameter-type exception.

So, eventually, where one would expect things to be done at compile time, they’re not. Hopefully, this is a good example of the dualistic nature of Raku which is balancing between static and dynamic approaches.
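The behaviour described above can be seen in a small self-contained example (assuming a recent Rakudo with the new coercion protocol; class names are illustrative):

```raku
role R1[::T] {
    method foo(T $x) { $x }
}
class C1 does R1[Str(Int:D)] { }

# The Int is coerced to a Str via the role's type parameter...
say C1.foo(42).^name;                          # Str
# ...while a Num fails the Int:D constraint at run time and throws.
say (try C1.foo(pi)) // 'typecheck failed';
```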

Let’s see what it leads us to.

Submethods As Non-inheritable Properties

Though in general Raku design tries to stick to the Liskov substitution principle, submethods are a special case which breaks it intentionally. Anyone utilizing a submethod must remember this. Moreover, I’d say that a submethod must not be invoked directly without a really good reason to do so! If one does call a submethod, they must either make sure that the call is done on the submethod’s class, or use the .? operator to prevent their code from throwing:

class Foo {
    submethod foo { ... }
}

class Bar is Foo { }

sub foo(Foo $v) {
    with $v.?foo {
        ...
    }
}

sub bar(Foo $v) {
    if $v.WHAT === Foo {
        $v.foo;
    }
}

To my own point of view, the most useful use case for submethods is iterating over a class’s MRO to call submethods on the classes where they’re defined. There is a special method on Mu which implements this behavior – WALK. It is not documented yet, unfortunately. But it’s specced. Partially, its functionality is implemented with the .+ and .* operators.
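The .+ and .* operators are easiest to see with plain methods; a minimal sketch (class names invented):

```raku
class A      { method greet { 'hello from A' } }
class B is A { method greet { 'hello from B' } }

# .+ calls every greet found along B's MRO and collects the results
# (B's first, then A's); it dies if no method of that name exists.
# .* behaves the same but is happy with zero candidates.
say B.new.+greet;
```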

FALLBACK

I’d rather skip this case. Except for one note to be made: somehow it reminds me of the ways of TypeScript when it comes to type matching. I.e. we’d need to match an object’s content against our constraints.

Anyway, FALLBACK implementation is so much about runtime processing that I see no problem here whatsoever. Moreover, I’d rather avoid this kind of design pattern in a production code, unless it is tightly wrapped in a very small and perfectly documented module.

Role Or Class As a Function

This last case is perhaps the most interesting one because here what we can do about it right away:

subset Pathish of Any:D where Str | IO::Handle;

role Filish[*%mode] is IO::Handle {
    multi method COERCE(IO:D(Pathish) $file) {
        self.new(:path($file)).open: |%mode
    }

    multi method CALL-ME(Pathish:D $file) {
        IO::Handle.new(:path($file)).open: |%mode
    }
}

sub prep-file(Filish[:r, :!bin]() $h, Str:D $pfx) {
    $h.lines.map($pfx.fmt('%-10s: ') ~ *)».say;
}

sub prep-file2($path, Str:D $pfx) {
    my $h = Filish[:r, :!bin]($path);
    $h.lines.map($pfx.fmt('%-10s: ') ~ *)».say;
}

prep-file($?FILE, "Str");
prep-file($?FILE.IO, "IO");
prep-file($?FILE.IO.open(:a), "IO::Handle");
prep-file2($?FILE, "Str-2");

This is a slightly extended example from the previous post. I have only added the CALL-ME method and the prep-file2 sub. Apparently, the only significant difference from Wenzel’s code snippet is that the invocation of Filish has been moved from the signature into the function body. I think this is perfectly OK because one way or another it’s a runtime thingie.

LEAVE {}

Just to sum up the above written, Wenzel is right when he says that coercion is about static type checking. It indeed is. For this reason it ought to be strict because this is what we expect it to be.

It is also true that there are cases where only run-time checks make it possible to ensure that the object we work with conforms to our requirements. And this is certainly not where coercion comes to mind. This is the field of dynamic transformations, where specialized routines are what we need.

Raku Advent Calendar: Day 25: Reminiscence, refinement, revolution

Published by jnthnwrthngtn on 2020-12-25T01:01:00

By Jonathan Worthington

Raku release reminiscence

Christmas day, 2015. I woke up in the south of Ukraine – in the very same apartment where I’d lived for a month back in the spring, hacking on the NFG representation of Unicode. NFG was just one of the missing pieces that had to fall into place during 2015 in order for that Christmas – finally – to bring the first official release of the language we now know as Raku.

I sipped a coffee and looked out onto a snowy courtyard. That, at least, was reassuring. Snow around Christmas was relatively common in my childhood. It ceased to be the year after I bought a sledge. I opened my laptop, and took a look at the #perl6-dev IRC channel. Release time would be soon – and I would largely be a spectator.

My contributions to the Rakudo compiler had started eight years prior. I had no idea what I was getting myself into, although if I had known, I’m pretty sure I’d still have done it. The technical challenges were, of course, fascinating for somebody who had developed a keen interest in languages, compilers, and runtimes while at university. Larry designs languages with an eye on what’s possible, not on what’s easy, for the implementer. I learned, and continue to learn, a tremendous amount by virtue of working on Raku implementation. Aside from that, the regular speaking at workshops and conferences opened the door to spending some years as a teacher of software development and architecture, and after that left me with a wealth of knowledge to draw on as I moved on to focus on consultancy on developer tooling. Most precious of all, however, are the friendships forged with some of those sharing the Raku journey – which I can only imagine lasting a lifetime.

Eight years had gone by surprisingly quickly. When one is involved with the day-to-day development, the progress is palpable. This feature got done, this bug got fixed, this design decision got made, this library got written, this design decision got re-made for the third time based on seeing ways early adopters stubbed their toe, this feature got re-implemented for the fourth time because of the design change… From the outside, it all looks rather different; it’s simply taking forever, things keep getting re-done, and the developers of it “have failed us all” (all seasons have their grinches).

Similarly, while from the outside the Christmas release was “the big moment”, from the inside, it was almost routine. We shipped a Rakudo compiler release, just as we’d been doing every month for years on end. Only this time, we also declared the specification test suite – which constitutes the official specification of the language – as being an official language release. The next month would look much like the previous ones: more bug reports arrive, more things get fixed and improved, more new users show up asking for guidance.

The Christmas release was the end of the beginning. But the beginning is only the start of a story.

Regular Raku refinement

It’s been five years since That Christmas. Time continues to fly by. Each week brings its advances – almost always documented in what’s currently known as the Rakudo Weekly, the successor of the Perl 6 Weekly. Some things seem constant: there’s always some new bug reports, there will always be something that’s under-documented, no matter what you make fast there will always be something else that is still slow, no matter how unlikely it seemed someone would depend on an implementation detail they will have anyway, new Unicode versions require at least some effort, and the latest release of MacOS seemingly always requires some kind of tweak. Welcome to the life of those working on a language implementation that’s getting some amount of real-world use.

Among the seemingly unending set of things to improve, it’s easy to lose sight of just how far the Raku Programming Language, its implementation in Rakudo, and the surrounding ecosystem has come over the last five years. Here I’ll touch on just a few areas worthy of mention.

Maturity

Maturity of a language implementation, its tools, and its libraries, is really, really, hard won. There’s not all that many shortcuts. Experience helps, and whenever a problem can be avoided in the first place, by having somebody about with the background to know to do so, that’s great. Otherwise, it’s largely a case of making sure that when there are problems, they get fixed and something is done to try and avoid a recurrence. It’s OK to make mistakes, but making the same mistake twice is a bit careless.

The most obvious example of this is making sure that all implementation bugs have test coverage before the issue is considered resolved. However, being proactive matters too. Prior to every Rakudo compiler release, the tests of all modules in the ecosystem are run against it. Regressions are noted, and by now can even be automatically bisected to the very commit that caused the regression. Given releases take place every month, and this process can be repeated multiple times a month, there’s a good chance the developer whose work caused the regression will have it relatively fresh in their mind.

A huge number of things have been fixed and made more robust over the last five years, and there are tasks I can comfortably reach to Raku for today that I wouldn’t have five years back. Just as important, the tooling supporting the development and release process has improved too, and continues to do so.

Modules

There are around 3.5x as many modules available as there were five years ago, which means there’s a much greater chance of finding a module to do what you need. The improvement in quantity is easy enough to quantify (duh!), but the increase in quality has also had a significant impact, given that many problems we need to solve draw on a relatively small set of data formats, protocols, and so forth.

Just to pick a few examples, 5 years ago we didn’t have:

Given how regularly I’ve used many of these in my recent work using Raku, it’s almost hard to imagine that five years ago, none of them existed!

An IDE

I was somewhat reluctant to put this in as a headline item, given that I’m heavily involved with it, but the Comma IDE for Raku has been a game changer for my own development work using the language. Granted IDEs aren’t for everyone or everything; if I’m at the command line and want to write a short script, I’ll open Vim, not fire up Comma. But for much beyond that, I’m glad of the live analysis of my code to catch errors, quick navigation, auto-complete, effortless renaming of many program elements, and integrated test runner. The timeline view can offer insight into asynchronous programs, especially Cro services, while the grammar live view comes in handy when writing parsers. While installing an IDE just for a REPL is a bit over the top, Comma also provides a REPL with syntax highlighting and auto-completion too.

Five years ago, the answer to “is there an IDE for Raku” was “uhm, well…” Now, it’s an emphatic yes, and for some folks to consider using the language, that’s important.

Performance

The Christmas release of Raku was not speedy. While just about everyone would agree there’s more work needed in this area, the situation today is a vast improvement on where we were five years ago. This applies at all levels of the stack: MoarVM’s runtime optimizer and JIT have learned a whole bunch of new tricks, calls into native libraries have become far more efficient (to the benefit of all bindings using this), the CORE setting (Raku’s standard library) has seen an incredible number of optimizations, and some key modules outside of the core have received performance analysis and optimization too. Thanks to all of that, Raku can be considered “fast enough” for a much wider range of tasks than it could five years ago.

And yes, the name

Five years ago, Raku wasn’t called Raku. It was called Perl 6. Name change came up now and then, but was a relatively fringe position. By 2019, it had won sufficient support to take place. A year and a bit down the line, I think we can say the rename wasn’t a panacea, but nor could it be considered a bad thing for Raku. While I’ve personally got a lot of positive associations with the name “Perl”, it does carry an amount of historical baggage. One of the more depressing moments for me was when we announced Comma, and then saw many of the comments from outside of the community consist of the same tired old Perl “jokes”. At least in that sense, a fresh brand is a relief. Time will tell what values people will come to attach to it.

Rational Raku revolution

With software, it feels like the ideal time to start working on something is after having already done most of the work on it. At that point, the required knowledge and insight is conveniently to hand, and at least a good number of lessons wouldn’t need to be learned the hard way again.

Alas, we don’t have a time machine. But we do have an architecture that gives us a chance of being able to significantly overhaul one part of the stack at a time, so we can use what we’ve learned. A number of such efforts are now underway, and stand to have a significant impact on Raku’s next five years.

The most user-facing one is known as “RakuAST”. It involves a rewrite of the Rakudo compiler frontend that centers around a user-facing AST – that is, a document object model of the program. This opens the door to features like macros and custom compiler passes. These may not sound immediately useful to the everyday Raku user, but will enable quite a few modules to do what they’re already doing in a better way, as well as opening up some new API design possibilities.

Aside from providing a foundation for new features, the compiler frontend rewrite that RakuAST entails is an opportunity to eliminate a number of long-standing fragilities in the current compiler implementation. This quite significant overhaul is made achievable by the things that need not change: the standard library, the object system implementation, the compiler backend, and the virtual machine.

A second ongoing effort is to improve the handling of dispatch in the virtual machine, by introducing a single more general mechanism that will replace a range of feature-specific optimizations today. It should also allow better optimization of some things we presently struggle with. For example, using deferral via callsame, or where clauses in multiple dispatch, comes with a high price today (made to stand out because many other constructs in that space have become much faster in recent years). The goal is to do more with less – or at least, with less low-level machinery in the VM.

It’s not just the compiler and runtime that matter, though. The recently elected Raku Steering Council stands to provide better governance and leadership than has been achieved in the last few years. Meanwhile, efforts are underway to improve the module ecosystem and documentation.

Today, we almost take for granted much of the progress of the last five years. It’s exciting to think what will become the Raku norm in the next five. I look forward to creating some small part of that future, and especially to seeing what others – perhaps you, dear reader – will create too.

Raku Advent Calendar: Day 24: Christmas-oriented programming, part deux

Published by jjmerelo on 2020-12-24T01:01:00

In the previous installment of this series of articles, we started with a straightforward script, and we wanted to arrive to a sound object-oriented design using Raku.

Our (re)starting point was this user story:

[US1] As a NPCC dean, given I have a list of classrooms (and their capacity) and a list of courses (and their enrollment), I want to assign classrooms to courses in the best way possible.

And we arrived to this script:

my $courses = Course-List.new( "docs/courses.csv");
my $classes = Classroom-List.new( "docs/classes.csv");
say ($classes.list Z $courses.list )
        .map( {  $_.map( { .name } ).join( "\t→\t") }  )
        .join( "\n" );

That does not really cut it, though. Every user story must be solved with a set of tests. But, well, the user story was kinda vague to start with: “in the best way possible” could be anything. So it could be argued that the way we have done it is, indeed, the best way, but we can’t really say without the tests. So let’s reformulate the US a bit:

[US1] As a NPCC dean, given I have a list of classrooms (and their capacity) and a list of courses (and their enrollment), I want to assign classrooms to courses so that no course is left without a classroom, and all courses fit in a classroom.

This is something we can hold on to. But of course, scripts can’t be tested (well, they can, but that’s another story). So let’s give this script a bit of class.

Ducking it out with lists

Actually, there’s something that does not really cut it in the script above. In the original script, you took a couple of lists and zipped them together. Here you need to call the .list method to achieve the same. But the object is still the same, right? Shouldn’t it be possible, and easy, to just zip the two objects together? Also, that requires the client of the class to know the actual implementation. An object should hide its internals as much as possible. Let’s make that an issue to solve:

As a programmer, I want the object holding the courses and classrooms to behave as would a list in a “zipping” context.

Santa rubbed his beard, thinking about how to pull this off. Course-List objects are, well, that precise kind of objects. They include a list, but how can they behave as a list? And what, precisely, is a list “in a zipping context”?

Long story short, he figured out that a “zipping context” actually iterates over every member of the two lists, in turn, putting them together. So we need to make the objects Iterable. Fortunately, that’s something you can definitely do in Raku. By mixing roles, you can make objects behave in some other way, as long as you’ve got the machinery to do so.

unit role Cap-List[::T] does Iterable;

has T @!list;

submethod new( $file where .IO.e ) {
    $file.IO.lines
            ==> map( *.split( /","\s+/) )
            ==> map( { T.new( @_[0], +@_[1] ) } )
            ==> sort( { -$_.capacity } )
            ==> my @list;
    self.bless( :@list );
}
submethod BUILD( :@!list ) {}

method list() { @!list }

method iterator() {@!list.iterator}

With respect to the original version, we’ve just mixed in the Iterable role and implemented an iterator method, that returns the iterator on the @!list attribute. That’s not the only thing we need for it to be in “a zipping context”, however. Which begs a small digression on Raku containers and binding.

Containers and containees

El cielo está entablicuadrillado, ¿quién lo desentablicuadrillará? El que lo entablicuadrille, buen entablicuadrillador será. — Spanish tongue twister, loosely translated as “The sky is tablesquarebricked; who will de-tablesquarebrick it? The tablesquarebricklayer that tablesquarebricks it a good tablesquarebricklayer will be.”

This is almost tablesquaredwhatever

It’s worth the while to check out this old Advent article, by Zoffix Znet, on what’s binding and what’s assignment in the Raku world. Binding is essentially calling an object by another name. If you bind an object to a variable, that variable will behave exactly the same as the object. And the other way round.

my $courses := Course-List.new( "docs/courses.csv");

We are simply calling the right hand side of this binding by another name, which is shorter and more convenient. We can call any method, and also we can put this “in a zipping context” by calling for on it:

.name.say for $courses;

Will return

Woodworking 101
Toymaking 101
ToyOps 310
Wrapping 210
Ha-ha-haing 401
Reindeer speed driving 130

As you can see, the “zipping context” is exactly the same as the (not-yet-documented) iterable context, which objects are coerced into (or which is invoked on them, whichever you prefer) when used with for. for $courses will actually call $courses.iterator, returning the iterator of the list it contains.

This is not actually a digression, this is totally on topic. I will have to digress, however, to explain what would have happened in the case we would have used normal assignment, as in

my $boxed-courses = Course-List.new( "docs/courses.csv");

Assignment is a nice and peculiar thing in Raku. As the above-mentioned article says, it boxes an object into a container. You can’t easily box any kind of thing into a Scalar container, so, Procrustes-style, it needs to fit it into the container in a certain way. But any way you think about it, the fact is that, unlike before, $boxed-courses is not a Course-List object; it’s a Scalar object that has scalarized, or itemized, a Course-List object. What would you need to de-scalarize it? Simply calling the de-cont operator on it, $boxed-courses<>, which unwraps the container and gives you what’s inside.
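A minimal sketch of the difference (the variable names here are just for illustration):

```raku
my @words = <a b c>;

# Binding: $bound is just another name for the array
my $bound := @words;
.say for $bound;      # iterates the elements: a, b, c

# Assignment: $boxed is a Scalar that itemizes the array
my $boxed = @words;
.say for $boxed;      # a single iteration over the whole itemized array
.say for $boxed<>;    # de-cont: iterates the elements again
```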

Scheduler classes

Filling the class in all the wrong places

OK, back to our regular schedule…r.

Again, let’s not just try to do things as we see fit. We need to create an issue to fix:

Santa is happy to prove such a thing:

use Course-List;
use Classroom-List;

unit class Schedule;

has @!schedule;

submethod new( $courses-file where .IO.e,
               $classes-file where .IO.e) {

    my $courses := Course-List.new($courses-file);
    my $classes := Classroom-List.new($classes-file);
    my @schedule = ($classes Z $courses).map({ $_.map({ .name }) });
    self.bless(:@schedule);
}

submethod BUILD( :@!schedule ) {}

method schedule() { @!schedule }

method gist {
    @!schedule.map( { .join( "\t⇒\t" ) } ).join("\t");
}

Not only does it schedule courses, you can use it simply by saying it. It’s also tested, so you know that it’s going to work no matter what. With that, we can close the user story.

But, can we?

Wrapping up with a script

Santa was really satisfied with this new application. He only needed to write this small main script:

use Schedule;

sub MAIN( $courses-file where .IO.e = "docs/courses.csv",
          $classes-file where .IO.e = "docs/classes.csv") {
    say Schedule.new( $courses-file, $classes-file)
}

Which was straight and to the point: here are the files, here’s the schedule. But, besides, it was tested, prepared for the unexpected, and could actually be expanded to take into account unexpected events (what happens if you can’t fit elves into a particular class? What if you need to take into account other constraints, like not filling biggest first, but snuggest first?). You can just change the algorithm, without even changing this main script. Which, in fact, you don’t really need:

raku -Ilib -MSchedule -e "say Schedule.new( | @*ARGS )" docs/courses.csv docs/classes.csv

Using the command line switches for the library search path (-I) and for loading a module (-M), you can just write a statement that takes the arguments and flattens them into the method’s signature.
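The flattening that | performs in that one-liner can be sketched in isolation; the schedule sub here is just a stand-in for the real constructor:

```raku
sub schedule( $courses-file, $classes-file ) {
    say "Scheduling $courses-file against $classes-file";
}

my @args = 'docs/courses.csv', 'docs/classes.csv';
# | slips the two array elements into the two positional parameters
schedule( |@args );
```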

Doing this, Santa sat down in his favorite armchair to enjoy a cup of cask-aged eggnog and watch every Santa-themed movie that was being streamed until next NPCC semester started.

Raku Advent Calendar: Day 23: Christmas-oriented design and implementation

Published by jjmerelo on 2020-12-22T22:44:12

Elves graduating from the community college, in front of a Santa Statue

Every year by the beginning of the school year, which starts by January 8th in the North Pole, after every version of the Christmas gift-giving spirit has made their rounds, Santa needs to sit down to schedule the classes of the North Pole Community College. These elves need continuous education, and they need to really learn about those newfangled toys, apart from the tools and skills of the trade.

Plus it’s a good thing to have those elves occupied during the whole year in something practical and useful, so that they don’t start to invent practical jokes and play them on each other.

Since there are over one million elves, the NPCC is huge. But there’s also a huge problem in assigning courses to classrooms. Once registration for classes is open, the elves talk to each other about which is the ultimate blow-off class, and which one gives you extra credit for winning snowball fights. So you can’t just cap enrollment: every year, you need to check the available classrooms, and then match each one to the course that will be the most adequate for it.

Here are the available classrooms:

Kris, 33
Kringle, 150
Santa, 120
Claus, 120
Niklaas, 110
Reyes Magos, 60
Olentzero, 50
Papa Noël, 30

They’re named after gift-giving spirits from all over the world, with the biggest class obviously named Kringle. In any given year, this could be the enrollment after the registration period is over.

Woodworking 101, 130
Toymaking 101, 120
Wrapping 210, 40
Reindeer speed driving 130, 30
ToyOps 310, 45
Ha-ha-haing 401, 33

They love woodworking 101, because it’s introductory, and they get to keep whatever assignment they do during the year. Plus you get all the wood parings for burning in your stove, something immensely useful in a place that’s cold all year long.

So Santa created this script to take care of it, using a bit of point-free programming and, Perl being Perl, the whippipitude and dwimmability of the two sister languages, Perl and Raku.

sub read-and-sort( $file where .IO.e ) {
    $file.IO.lines
      ==> map( *.split( /","\s+/) )
      ==> sort( { -$_[1].Int } )
      ==> map( { Pair.new( |@_ ) } )
}

say (read-and-sort( "classes.csv") Z read-and-sort( "courses.csv"))
    .map( {  $_.map( { .key } ).join( "\t→\t") }  )
    .join( "\n" )

The subroutine reads the file given its name (checking first that it exists), splits each line on the comma, sorts the entries in decreasing order of capacity, and then creates a pair out of each. The last statement uses the Z operator to zip the two lists together, both in decreasing order, to produce a list just like this one:

Kringle	→	Woodworking 101
Santa	→	Toymaking 101
Claus	→	ToyOps 310
Niklaas	→	Wrapping 210
Reyes Magos	→	Ha-ha-haing 401
Olentzero	→	Reindeer speed driving 130

So the Kringle lecture hall gets woodworking, and it goes down from there. The Kris and Papa Noël classrooms get nothing, having been eliminated from the schedule, but they are kept there to be used for extra-curricular activities such as carol singing and origami.

All this works well while it does: as long as you remember where the files are and what the script did, nothing changes name or capacity, and the files are not lost. But those are a lot of ifs, and Santa is not getting any younger.

As a matter of fact, not getting any older either.

So Santa and his ToyOps team will need a more systematic approach to this scheduling, creating an object-oriented application from requirements. After learning all about TWEAK and roles, now it’s time to stand back and put it to work from the very beginning.

Agile scheduling

The cold that pervades the North Pole makes everything a little less agile. But no worries, we can still be agile when we create something for IT operations there. First thing we need are user stories. Who wants to create a schedule and what is it? So let’s sit down and write them down.

OK, we got something to work on here, so we can apply a tiny bit of domain driven design. We have a couple of entities, classrooms and courses, and a few value objects: single classrooms and single courses. Let’s go and write them. Using Comma, of course.

Classes

Using classes for classes is only natural. But looking at the two defined classes, Santa couldn’t say which was which. At the end of the day, something with a name and a capacity is something with a name and a capacity. This begs for a factoring out of the common code, following the DRY (don’t repeat yourself) principle.

Besides, we have a prototype above that pretty much says that whatever we use for classrooms and courses, our life is going to be easier if we can sort it in the same way. So it’s probably best if we spin off a role with the common behavior. Let’s make an issue out of that. Pretty much the same as the US, but the protagonist is going to be a programmer:

Let’s call it Cap, as in having a certain capacity. Since both objects will have the self-same method, capacity, we can call it to sort them out. Our example above shows that we need to create an object out of a couple of elements, so that’s another issue.

So finally our role will have all these things:

unit role Cap;

has Str $!name is required;
has Int $!capacity is required;

submethod new( Str $name, Int $capacity ) {
    self.bless( :$name, :$capacity )
}

submethod BUILD( :$!name, :$!capacity ) {};

method name() { $!name }
method capacity() { $!capacity }

plus handy accessors for name and capacity, all the while keeping these private, which also implies that they are immutable. Value objects are things that simply hold a value; there’s not much business logic to them.
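Assuming the Cap role above is available (say, via use Cap), a value object that does it would be used like this; the Classroom class here is a sketch of what comes next:

```raku
use Cap;

class Classroom does Cap {}

my $room = Classroom.new( 'Kringle', 150 );
say $room.name;       # Kringle
say $room.capacity;   # 150
```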

We could already create a function that does the sorting out of a list of classrooms/courses, that is, Caps, but in OO design we should try and put into classes (from which we spawn objects) as many entities from the original problem as possible. These entities will be the ones actually doing the heavy lifting.

It would be great, again, if we could create kinda the same things, because we will be able to handle them uniformly. But there’s the conundrum: one of them will contain a list of Courses, the other a list of Classrooms. They both do Cap, so in principle we could create a class that hosts lists of Caps. But this controller class will have some business logic: it will create objects of that class; we can’t simply use roles to create classes that compose them. So we will use a curried role, a parametrized role that takes, as a parameter, the role we’ll be instantiating it with. This will be Cap-List:

unit role Cap-List[::T];

has T @!list;

submethod new( $file where .IO.e ) {
    $file.IO.lines
            ==> map( *.split( /","\s+/) )
            ==> map( { T.new( @_[0], +@_[1] ) } )
            ==> sort( { -$_.capacity } )
            ==> my @list;
    self.bless( :@list );
}
submethod BUILD( :@!list ) {}

method list() { @!list }

This code is familiar, similar to what we’ve done above, except that we’ve swapped the object creation and the sorting of the list, and we’re using .capacity to sort it. We create a list and bless it into the object. Out of that, we create a couple of classes:

unit class Classroom-List does Cap-List[Classroom];
unit class Course-List does Cap-List[Course];

We don’t need any more logic; that’s all there is. It’s essentially the same thing, same business logic, but we’re working in a type-safe way. We have also tested the whole thing, so we’ve frozen the API and protected it from future evolution. Which Santa approves.

So we’re almost there. Let’s write the assignment function with this:

my $courses = Course-List.new( "docs/courses.csv");
my $classes = Classroom-List.new( "docs/classes.csv");
say ($classes.list Z $courses.list )
        .map( {  $_.map( { .name } ).join( "\t→\t") }  )
        .join( "\n" );

This returns the same thing as we had before. But we’ve hidden all business logic (sorting, and anything else we might want) in the object capsule.

But, have we?

Not actually. Assignment should also be encapsulated in some class, and thoroughly tested. That’s, however, left for another occasion.

Andrew Shitov: Raku Challenge, Week 92, Issue 1

Published by Andrew Shitov on 2020-12-22T08:24:00

This week’s task has an interesting solution in Raku. So, here’s the task:

You are given two strings $A and $B. Write a script to check if the given strings are Isomorphic. Print 1 if they are, otherwise 0.

OK, so if the two strings are isomorphic, their characters are mapped: for each character from the first string, the character at the same position in the second string is always the same.

In the strings abc and def, a always corresponds to d, b to e, and c to f. That’s a trivial case. But then for the string abca, the corresponding string must be defd.

The letters do not need to go sequentially, so the strings aeiou and bcdfg are isomorphic too, as well as aeiou and gxypq. But also aaeeiioouu and bbccddffgg, or the pair aeaieoiuo and gxgyxpyqp.

The definition also means that the number of different characters is equal in both strings. But it also means that if we make the pairs of corresponding letters, the number of unique pairs is also the same, right? If a matches x, there cannot be any other pair with the first letter a.
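Those pairs of corresponding letters can be built directly with the zip-with-concatenation metaoperator Z~, which the solution below relies on:

```raku
# Zip two strings character by character, concatenating each pair
say ('abca'.comb Z~ 'defd'.comb);          # (ad be cf ad)

# Three unique pairs — the same count as the unique letters on each side
say ('abca'.comb Z~ 'defd'.comb).unique;   # (ad be cf)
```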

Let’s exploit these observations:

sub is-isomorphic($a, $b) {
    +(([==] ($a, $b)>>.chars) && 
      ([==] ($a.comb, $b.comb, ($a.comb Z~ $b.comb))>>.unique));
}

First of all, the strings must have the same length.

Then, the strings are split into characters, and the numbers of unique characters should also be equal. Finally, the collection of unique pairs built from the corresponding letters of both strings should be of the same size as well.

Test it:

use Test;

# . . .

is(is-isomorphic('abc', 'def'), 1);
is(is-isomorphic('abb', 'xyy'), 1);
is(is-isomorphic('sum', 'add'), 0);
is(is-isomorphic('ACAB', 'XCXY'), 1);
is(is-isomorphic('AAB', 'XYZ'), 0);
is(is-isomorphic('AAB', 'XXZ'), 1);
is(is-isomorphic('abc', 'abc'), 1);
is(is-isomorphic('abc', 'ab'), 0);

* * *

→ GitHub repository
→ Navigation to the Raku challenges post series

Raku Advent Calendar: Day 22: What’s the point of pointfree programming?

Published by codesections on 2020-12-22T01:01:00

He had taken a new name for most of the usual reasons, and for a few unusual ones as well, not the least of which was the fact that names were important to him.

— Patrick Rothfuss, The Name of the Wind

If you’re a programmer, there’s a good chance that names are important to you, too. Giving variables and functions well chosen names is one of the basic tenets of writing good code, and improving the quality of names is one of the first steps in refactoring low-quality code. And if you both are a programmer and are at all familiar with Raku (renamed from “Perl 6” in 2019), then you are even more likely to appreciate the power and importance of names.

This makes the appeal of pointfree programming – which advocates for removing many of the names in your code – a bit mysterious. Given how helpful good names are, it can be hard to understand why you’d want to eliminate them.

This isn’t necessarily helped by some of the arguments put forward by advocates of pointfree programming (which is also sometimes called tacit programming). For example, one proponent of pointfree programming said:

Sometimes, especially in abstract situations involving higher-order functions, providing names for tangential arguments can cloud the mathematical concepts underlying what you’re doing. In these cases, point-free notation can help remove those distractions from your code.

That’s not wrong, but it’s also not exactly helpful; when reading that, I find myself thinking “sometimes; OK, when? in abstract situations; OK, what sort of situations?” And it seems like I’m not the only one with a similar set of questions, as the top Hacker News comment shows. Given arguments like these, I’m not at all surprised that many programmers dismiss pointfree programming in essentially the same way Wikipedia does: according to Wikipedia, pointfree programming is “of theoretical interest” but can make code “unnecessarily obscure”.

This view – though understandable – is both mistaken and, I believe, deeply unfortunate. Programming in a pointfree style can make code far more readable; done correctly, it makes code less obscure rather than more. In the remainder of this post I’ll explain, as concretely as possible, the advantage of coding with fewer names. To keep myself honest, I’ll also refactor a short program into pointfree style (the code will be in Raku, but both the before and after versions should be approachable to non-Raku programmers). Finally, I’ll close by noting a handful of the ways that Raku’s “there’s more than one way to do it” philosophy makes it easier to write clear, concise, pointfree code (if you want to).

The fundamental point of pointfree

I said before that names are important, and I meant it. My claim is the one that G.K. Chesterton (or his dog) might have made if only he’d cared about writing good code: we should use fewer names not because names are unimportant but precisely because of how important names are.

Let’s back up for just a minute. Why do names help with writing clear code in the first place? Well, most basically, because good names convey information. sub f($a, $b) may show you that you’ve got a function that takes two arguments – but it leaves you totally in the dark about what the function does or what role the arguments play. But everything is much clearer as soon as we add names: sub days-to-birthday($person, $starting-date). Suddenly, we have a much better idea what the function is doing. Not a perfect idea, of course; in particular, we likely have a number of questions of the sort that would be answered by adding types to the code (something Raku supports). But it’s undeniable that the names added information to our code.

So if adding names adds info, it’ll make your code clearer and easier to understand, right? Well, sure … up to a point. But this is the same line of thinking that leads to pages and pages of loan “disclosures”, each of which is designed to give you more information about the loan. Despite these intentions, anyone who has confronted a stack of paperwork the approximate size of the Eiffel Tower can attest that the cumulative effect of this extra info is to confuse readers and obscure the important details. Excessive names in code can fall into the same trap: even if each name technically adds info, the cumulative effect of too many names is confusion rather than clarity.

Here’s the same idea in different words: what names add to your code is not just extra info but also extra emphasis. And the thing about emphasis – whether it comes from bold, all-caps, or naming – is that it loses its power when overused. Giving everything a name is the same sort of error as writing in ALL-CAPS. Basically, don’t be this guy:

<Khassaki>:      HI EVERYBODY!!!!!!!!!!  
<Judge-Mental>:  try pressing the Caps Lock key  
<Khassaki>:      O THANKS!!! ITS SO MUCH EASIER TO WRITE NOW!!!!!!!  
<Judge-Mental>:  f**k me  

source (expurgation added, mostly to have an excuse to use the word expurgation).

I believe that the fundamental benefit of using pointfree programming techniques to write code with fewer names is that it allows the remaining names to stand out more – which lets them convey more information than a sea of names would do.

What does it mean to “understand” a line of code?

Do you understand this line of Raku code?

$fuel += $mass

Let’s imagine how a very literal programmer – we’ll call them Literal Larry – might respond. (Literal Larry is, of course, not intended to refer to Raku founder Larry Wall. That Larry may have been accused of various stylistic flaws over the years, but never of excessive literalness.)

Literal Larry might say, “Of course I understand what that line does! There’s a $fuel variable, and it’s incremented by the value of the $mass variable. Could it be any more obvious?”. But my response to Larry (convenient strawman that he is) would be, “You just told me what that line says, but not what it does. Without knowing more of the context around that line, in fact, we can’t know what that line does. Understanding that single – and admittedly simple! – line requires that we hold the context of other lines in our head. Worse, because it’s changing the value of one variable based on the value of another, understanding it requires us to track mutable state – one of the fastest ways to add complexity to a piece of code.”

And that sets up my second claim about coding in a pointfree style: it often reduces the amount of context/state that you need in your head to understand any given line of code. Pointfree code reduces the reliance on context/state in two ways: first, to the extent that we totally eliminate some named variables, we obviously no longer need to mentally track the state of those variables. Less obviously (but arguably more importantly), a pointfree style naturally pushes you towards limiting the scope of your variables and reduces the number you need to keep track of at any one time. (You’ll see this in action as we work through the example below.)

A pointed example

Despite keeping our discussion as practical as possible, I worry that it has drifted a bit away from the realm of the concrete. Let’s remedy that by writing some actual code! I’ll present some code in a standard procedural style, refactor it into a more pointfree style, and discuss what we get out of the change.

But where should we get our before code? It needs to be decently written – my exchange with Literal Larry was probably enough strawmanning for one post, and I don’t want you to think that the refactored version is only an improvement because the original was awful. At the same time, it shouldn’t be great idiomatic Raku code, because that would mean using enough of Raku’s superpowers to reduce the code’s accessibility (I want to explain what’s going on in the after code, but don’t want to get bogged down teaching the before). It should also be just the right length – too short, and we won’t be able to see the advantages of reducing context; too long, and we won’t have space to walk through it in any detail.

Fortunately, the Raku docs provide the perfect before code: the Raku by example 101 code. This simple script is not idiomatic Raku; it’s a program that does real (though minimal) work while using only the very basics of Raku syntax. Here’s how that page describes the script’s task:

Suppose that you host a table tennis tournament. The referees tell you the results of each game in the format Player1 Player2 | 3:2, which means that Player1 won against Player2 by 3 to 2 sets. You need a script that sums up how many matches and sets each player has won to determine the overall winner.

The input data (stored in a file called scores.txt) looks like this:

Beth Ana Charlie Dave
Ana Dave | 3:0
Charlie Beth | 3:1
Ana Beth | 2:3
Dave Charlie | 3:0
Ana Charlie | 3:1
Beth Dave | 0:3

The first line is the list of players. Every subsequent line records a result of a match.

I believe that the code should be legible, even to programmers who have not seen any Raku. The one hint I’ll provide for those who truly haven’t looked at Raku (or Perl) is that @ indicates that a variable is array-like, % indicates that it’s hashmap-like, and $ is for all other variables. If any of the other syntax gives you trouble, check out the full walkthrough in the docs.

Here’s the 101 version:

use v6d;
# start by printing out the header.
say "Tournament Results:\n";

my $file  = open 'scores.txt'; # get filehandle and...
my @names = $file.get.words;   # ... get players.

my %matches;
my %sets;

for $file.lines -> $line {
    next unless $line; # ignore any empty lines

    my ($pairing, $result) = $line.split(' | ');
    my ($p1, $p2)          = $pairing.words;
    my ($r1, $r2)          = $result.split(':');

    %sets{$p1} += $r1;
    %sets{$p2} += $r2;

    if $r1 > $r2 {
        %matches{$p1}++;
    } else {
        %matches{$p2}++;
    }
}

my @sorted = @names.sort({ %sets{$_} }).sort({ %matches{$_} }).reverse;

for @sorted -> $n {
    my $match-noun = %matches{$n} == 1 ?? 'match' !! 'matches';
    my $set-noun   = %sets{$n} == 1 ?? 'set' !! 'sets';
    say "$n has won %matches{$n} $match-noun and %sets{$n} $set-noun";
}

OK, that was pretty quick. It uses my to declare 13 different variables; let’s see what it would look like if we declare 0. Before I start, though, one note: I said that the code above isn’t idiomatic Raku, and the code below won’t be either. I’ll introduce considerably more of Raku’s syntax where it makes the code more tacit, but I’ll still steer clear of some forms I’d normally use that aren’t related to refactoring the code in a more pointfree style. I also won’t make unrelated changes (e.g., removing mutable state) that I’d normally include. Finally, this code also differs from typical Raku (at least the way I write it) by being extremely narrow. I typically aim for a line length under 100 characters, but because I’d like this to be readable on pretty much any screen, these lines never go above 45.

With that throat-clearing out of the way, let’s get started. Our first step is pretty much the same as in the 101 code; we open our file and iterate through the lines.

open('scores.txt')
  ==> lines()

You can already see one of the key pieces of syntax we’ll be using to adopt a pointfree style: ==>, the feed operator. This operator takes the result from open('scores.txt') and passes it to lines() as its final argument. (This is similar to, but not exactly the same as, calling a .lines() method on open('scores.txt'). Most significantly, ==> passes a value as the last parameter to the following function; calling a method is closer to passing a value as the first parameter.)
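A toy pipeline shows the feed in action; nothing here is specific to the tournament code:

```raku
# The feed passes its left-hand result as the final argument
# of the next call in the chain:
(1, 2, 3) ==> map( * * 2 ) ==> say();   # (2 4 6)

# ...which is close to, but not the same as, chaining methods:
(1, 2, 3).map( * * 2 ).say;             # (2 4 6)
```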

Now we’re dealing with a list of all the lines in our input file – but we don’t actually need all the lines, because some are useless (to us) header lines. We’ll solve this in basically the same way we would on the command line: by using grep to limit the lines to just those we care about. In this case, that means just those that have the “ | ” (space-pipe-space) delimiter that occurs in all valid input lines.

  ==> grep(/\s '|' \s/)

A few syntax notes in passing: first, Raku obviously has first-class support for regular expressions. Second, and perhaps more surprisingly, note that Raku regexes default to being insensitive to whitespace; /'foo' 'bar'/ matches ‘foobar’, not ‘foo bar’. Finally, Raku regexes require non-alphanumeric characters to be quoted before they match literally.
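A few quick matches illustrate both defaults:

```raku
say so 'foobar'  ~~ /'foo' 'bar'/;     # True  — whitespace in the pattern is ignored
say so 'foo bar' ~~ /'foo' 'bar'/;     # False — the pattern's space matches nothing
say so 'foo bar' ~~ /'foo' \s 'bar'/;  # True  — match the space explicitly
say so 'a|b'     ~~ / '|' /;           # True  — non-alphanumerics must be quoted
```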

After using grep to limit ourselves to the lines we care about, we’re dealing with a sequence of lines something like Ana Dave | 3:0. Our next task is to convert these lines into something more machine readable. Since we just went over the regex syntax, let’s stick with that approach.

  ==> map({
      m/ $<players>=( (\w+)  \s (\w+) ) 
                       \s    '|'  \s
         $<sets-won>=((\d+) ':' (\d+) )/;
      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
  })

This uses the Regex syntax we introduced above and adds a bit on top. Most importantly, we’re now naming our capture groups: we have one capture group named players that captures the two space-separated player names before the | character. (Apparently our tournament only identifies players with one-word names, a limitation that was present in the 101 code as well.) And the sets-won named capture group extracts out the :-delimited set results.

Once we’ve captured the names and scores for that match, we associate the correct scores with the correct names and create a 2×2 matrix/nested array with our results.

Actually, though, we’re not quite done with everything we want to do inside this map – we’ve given meaning to the order of the elements within each row, but the order of the rows themselves is currently meaningless. Let’s fix that by sorting our returned array so that the winner is always at the front:

      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
      ==> sort({-.tail})

With this addition, our code so far is:

open('scores.txt')
  ==> lines()
  ==> grep(/\s '|' \s/)
  ==> map({
      m/ $<players>=( (\w+)  \s (\w+) ) 
                       \s    '|'  \s
         $<sets-won>=((\d+) ':' (\d+) )/;
      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]      
      ==> sort({-.tail})
  })

At this point, we’ve processed our input lines into arrays; we’ve gone from something like Ana Dave | 3:0 to something a bit like

[ [Ana,  3],
  [Dave, 0] ]

Now it’s time to start combining our separate arrays into a data structure that represents the results of the entire tournament. As in most languages these days, Raku does this with reduce (some languages call the same operation fold). We’re going to use reduce to build a single hashmap out of our list of nested arrays. However, before we can do so, we’re going to need to add an appropriate initial value to reduce onto (here, an empty Hash).
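As a reminder of the shape of reduce, here is an unrelated summing example (the [+] shorthand is Raku’s reduction metaoperator):

```raku
# reduce folds a list into a single value with a two-argument combiner
say (1, 2, 3, 4).reduce({ $^a + $^b });   # 10

# the [+] reduction metaop is the idiomatic shorthand for the same fold
say [+] 1, 2, 3, 4;                       # 10
```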

Raku gives us a solid half-dozen ways to do so – including specifying an initial value when you call reduce, much like you would in modern JavaScript. I’m going to accomplish the same thing differently, both because it’s more fun and because it lets me introduce you to 5 useful pieces of syntax in just 10 characters, which may be some sort of a record. Here’s the line:

  ==> {%, |$_}()

OK, there’s a lot packed in there! Let’s step through it. {...} is Raku’s anonymous block (i.e., lambda) syntax. So {...}() would normally create an anonymous block and then call it without any arguments. However, as we said above, ==> automatically passes the return value of its left-hand side as the final argument to its right-hand side. So ==> {...}() calls the block with the value that was fed into the ==>.

Since this block doesn’t specify a signature (more on that very shortly), it doesn’t have any named parameters at all; instead, any values the block is called with are placed in the topic variable – which is accessed with $_. Putting what we have so far together, we can show a complex (but succinct!) way to do nothing: ==> {$_}(). That expression feeds a value into a block, loads the value into the topic variable, and then returns it without doing anything at all.

Our line did something, however – after all, we have 4 more characters and 2 new concepts left in our line! Starting at the left, we have the % character, which you may recognize as the symbol that indicates that a variable is hash-like (Associative, if we’re being technical). On its own like this, it effectively creates an empty hash – which we could also have done with Hash.new, {}, or %(), but I like % best here. And the , operator, which we’ve already used without remarking on, combines its arguments into a list.

Here’s an example using the syntax we’ve covered so far:

[1, 2, 3] ==> {0, $_}()

That would build a list out of 0 and [1, 2, 3]. Specifically, it would build a two-element list; the second element would be the array [1, 2, 3]. That is not quite what we want, because we want to add % onto the front of our existing list instead of creating a new and more nested list.

As you may have guessed, the final character we have left – | – solves this problem for us. This Slip operator is one of my favorite bits of Raku cleverness. (Well, top 20 anyway – there are a lot of contenders!) The | operator transforms a list into a Slip, which is “a kind of List that automatically flattens”, as the docs put it. In practice, this means that Slips merge into lists instead of becoming single elements in them. To return to our earlier example,

[1, 2, 3] ==> {0, |$_}()

produces the four-element list (0, 1, 2, 3) instead of the two-element list (0, [1, 2, 3]) we got without the |.

Putting all this together, we are now in a position to easily understand the ~15 character (!) line of code we’ve been talking about. Recall that we’d just used map to transform our list of lines into a list of 2×2 matrices. If we’d printed them out, we would see something kind of like:

( [ [Ana,  3],
    [Dave, 0] ],
  ...
)

When we feed this array into the {%, |$_}() block, we slip it into a list with the empty hash, and end up with something like:

( {},
  [ [Ana,  3],
    [Dave, 0] ],
  ...
)

With that short-but-dense line out of the way, we can proceed on to calling reduce. As in many other languages, we’ll pass in a function for reduce to use to combine our values into a single result value. We’ll do this with the block syntax we just introduced (see, taking our time on that line is already starting to pay off!). So it will look something like this:

==> reduce( { 
    # Block body goes here
})

Before filling in that body, though, let’s say a word about signatures (I told you it’d come up soon). As we discussed, when you don’t specify a signature for a block, all the arguments passed to the block get loaded into the topic variable $_. We can do anything we need to by manipulating/indexing into the topic variable, but that could get pretty verbose. Fortunately, we can specify the signature for a block by placing parameter names between -> and the opening {. Thus, -> $a, $b { $a + $b } is a block that accepts exactly two arguments and returns the sum of its arguments.
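To see block signatures in isolation, here’s a tiny standalone sketch (the names are invented for illustration):

```raku
# A two-parameter pointy block bound to a callable variable
my &add = -> $a, $b { $a + $b };
say add(2, 3);      # 5

# With no signature, the argument lands in the topic variable instead
my &double = { $_ × 2 };
say double(21);     # 42
```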

In our case, we know that the first argument to reduce is going to be the hash we’re building up to track the total wins in the tournament and the second will be the 2×2 array that represents the results of the next match. That gives us a signature of

==> reduce(-> %wins, @match-results { 
    # Block body goes here
})

So, how do we fill in the body? Well, since we previously sorted the array we’re now calling @match-results, we know that the first row contains the person who won the most sets (and therefore the match). More specifically, the first element in the first row contains that person’s name. So we want the first element of the first row – that is, the element that would be at (0, 0) if our array were laid out in 2D. Fortunately, Raku supports directly indexing into multi-dimensional arrays, so accessing this name is as simple as @match-results[0;0]. This means we can update our hash to account for the match winner with

      %wins{@match-results[0;0]}<matches>++;
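Multi-dimensional subscripts work on ordinary nested arrays, as this small standalone snippet (with invented data) shows:

```raku
my @match-results = ['Ana', 3], ['Dave', 0];
say @match-results[0;0];   # Ana – the winner’s name
say @match-results[1;1];   # 0   – the loser’s set count
```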

Handling the sets is very similar – the biggest difference is that we iterate through both rows of @match-results instead of indexing into the first row:

      for @match-results -> [$name, $sets] {
          %wins{$name}<sets> += $sets;
      }

Note the -> [$name, $sets] signature above. This shows Raku’s strong support for destructuring assignment, another key tool in avoiding explicit assignment statements. -> [$a, $b] tells Raku that the block accepts a single array with two elements in it and assigns names to each. It’s equivalent to writing -> @array { my $a = @array[0]; my $b = @array[1]; ... }.

(And if the idea of using destructuring assignment to avoid assignment feels like cheating in terms of pointfree style, then hold that thought because we’ll come back to it when we get to the end of this example.)
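Here’s that destructuring pattern in a standalone loop, with invented data:

```raku
# Each array element is destructured into $name and $sets
for ['Ana', 3], ['Dave', 0] -> [$name, $sets] {
    say "$name took $sets sets";
}
```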

At the end of our reduce block, we need to return the %wins hash we’ve been building. Putting it all together gives us

  ==> reduce(-> %wins, @match-results {
      %wins{@match-results[0;0]}<matches>++;
      for @match-results -> [$name, $sets] {
          %wins{$name}<sets> += $sets;
      }
      %wins
  })

At this point, we’ve built a hash-of-hashes that contains all the info we need; we’re done processing our input. Specifically, our hash contains keys for each of the player names in the tournament; the value of each is a hash showing that player’s total match and set wins. It looks a bit like this:

{ Ana  => { matches => 2,
            sets    => 8 }
  Dave => ...,
  ...
}

This contains all the information we need but not necessarily in the easiest shape to work with for generating our output. Specifically, we would like to print results in a particular order (winners first) but we have our data in a hash, which is inherently unordered. Thus – as happens so often – we need to reshape our data from the shape that was the best fit for processing our input data into the shape that is the best fit for generating our output.

Here, that means going from a hash-of-hashes to a list of hashes. We do so by first transforming our hash into a list of key-value pairs and then mapping that list into a list of hashes. In that map, we need to add the player’s name (info that was previously stored in the key of the outer hash) into the inner hash – if we skipped that step, we wouldn’t know which scores went with which players.

Here’s how that looks:

  ==> kv()
  ==> map(-> $name, %_ { %{:$name, |%_} })

I’ll note, in passing, that our map uses both destructuring assignment and the | slip operator to build our new hash. After this step, our data looks something like

( { name    => "Ana",
    matches => 2,
    sets    => 8 }
  ...
)

This list isn’t inherently unordered the way a hash is, but we haven’t yet put it in any meaningful order. Let’s do so now.

  ==> sort({.<matches>, .<sets>, .<name>})
  ==> reverse()

Note that this preserves the somewhat wackadoodle sort order from the original code: sort by match wins, high to low; break ties in matches by set wins; break ties in set wins by reverse alphabetical order.

At this point, we have all our output data organized properly; all that is left is to format it for printing. When printing our output, we need to use the correct singular/plural affixes – that is, we don’t want to say someone won “1 sets” or “5 set”.

Let’s write a simple helper function to handle this for us. We could obviously write a function that tests whether we need a singular or plural affix, but instead let’s take this chance to look at one more Raku feature that makes it easier to write pointfree code: multi-dispatch functions that perform different actions based on how they’re called.

The function we want should accept a key-value pair and return the singular version of the key when the associated value is 1; otherwise, it should return the plural version of the key. Let’s start by stating what all versions of our function have in common using a proto statement:

proto kv-affix((Str, Int) --> Str) {{*}}

A few things to know about that proto statement: This is the first time we’ve added type constraints to our code, and they work just about as you’d expect. kv-affix can only be called with a string as its first argument and an integer as its second (this protects us from calling it with the key and value in the wrong order, for example). It’s also guaranteed to return a string. Additionally, note that we can destructure using a type (Str, here) without needing to declare a variable – handy for situations like this, where we want to match on a type without needing to use the value.

Finally, note that the proto is entirely optional; indeed, I don’t think that I’d necessarily use one here. But I would have felt remiss if we didn’t discuss Raku’s support for type constraints, which is generally quite helpful in writing pointfree code (even if we haven’t really needed it today).

Next, let’s handle the case where we need to return the singular version of the key:

multi kv-affix(($_, 1)) { S/e?s$// }

As you can see, Raku lets us destructure/pattern match with literals – this version of our multi will only be invoked when kv-affix is called with 1 as its second argument. Additionally, notice that we’re destructuring the first parameter into $_, the special topic variable. Setting the topic variable not only lets us use that variable without giving it a name, but it also enables all the tools Raku reserves for the current topic. (If we want these tools without destructuring into the topic variable, we can also set the topic with with or given.)

Setting the topic to the key we’re modifying is helpful here because it lets us use the S/// non-destructive substitution operator. This operator matches a regex against the topic and then returns the string that results from replacing the matched portion of the string. Here, we match 0 or 1 e’s (e?) followed by an ‘s’, followed by the end of the string ($). We then replace that ‘s’ or ‘es’ with nothing, effectively trimming the plural affix from the string.
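As a quick standalone illustration of S/// against the topic (using the same e?s$ pattern):

```raku
for <matches sets> {
    say S/e?s$//;              # match, then set
}
given 'boxes' { say S/e?s$// } # box – the 'es' affix is trimmed
```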

The final multi candidate is trivial. It just says to return the unaltered plural key when the previous multi candidate didn’t match (that is, when the value isn’t 1).

multi kv-affix(($k, $)) { $k }

(We use $ as a placeholder for a parameter when we don’t need to care about its type or its value.)

With those three lines of code, we now have a little helper function that will give us the correct singular/plural version of our keys. In all honesty, I’m not sure it was actually worth using a multi here. This might be a situation where a simple ternary condition – something like sub kv-affix(($_, $v)) { $v ≠ 1 ?? $_ !! S/e?s$// } – might have done the trick more concisely and just as clearly. But that wouldn’t have given us a reason to talk about multis, and those are just plain fun.

In any event, now that we have our helper function, formatting each line of our output is fairly trivial. Below, I do so with the venerable C-style sprintf, but Raku offers many other options for formatting textual output if you’d prefer something else.

  ==> map({
      "%s has won %d %s and %d %s".sprintf(
          .<name>,
          .<matches>, kv-affix(.<matches>:kv),
          .<sets>,    kv-affix(.<sets>:kv   )
      )})

And once we’ve formatted each line of our output, the final step is to add the appropriate header, concatenate our output lines, and print the whole thing.

  ==> join("\n", "Tournament Results:\n")
  ==> say();

And we’re done.

Evaluating our pointfree refactor

Let’s take a look at the code as a whole and talk about how it went.

use v6d;
open('scores.txt')
  ==> lines()
  ==> grep(/\s '|' \s/)
  ==> map({
      m/ $<players>=( (\w+)  \s (\w+) ) 
                       \s    '|'  \s
         $<sets-won>=((\d+) ':' (\d+) )/;
      [$<players>[0], $<sets-won>[0]],
      [$<players>[1], $<sets-won>[1]]
      ==> sort({-.tail}) })
  ==> {%, |$_}()
  ==> reduce(-> %wins, @match-results {
      %wins{@match-results[0;0]}<matches>++;
      for @match-results -> [$name, $sets] {
          %wins{$name}<sets> += $sets;
      }
      %wins })
  ==> kv()
  ==> map(-> $name, %_ { %{:$name, |%_} }) 
  ==> sort({.<matches>, .<sets>, .<name>})
  ==> reverse()
  ==> map({
      "%s has won %d %s and %d %s".sprintf(
          .<name>,
          .<matches>, kv-affix(.<matches>:kv),
          .<sets>,    kv-affix(.<sets>:kv) )})
  ==> join("\n", "Tournament Results:\n")
  ==> say();

proto kv-affix((Str, Int) --> Str) {{*}}
multi kv-affix(($_, 1)) { S/e?s$// }
multi kv-affix(($k, $)) { $k }

So, what can we say about this code? Well, at 32 lines of code, it’s longer than the 101 version (and, even though these lines are pretty short, it’s longer by character count as well). So this version doesn’t win any prizes for concision. But that was never our goal.

So how does it do on the goal we started out with – reducing assignments? Well, if we channel Literal Larry, we can say that it has zero assignment statements; it never assigns a value to a variable with my $name = 'value' or similar syntax. In contrast, the 101 code used my to assign to a variable over a dozen times. So, from a literal perspective, we succeeded.

But, as we already noted, ignoring destructuring assignment feels very much like cheating. Similarly, using named captures in a regex is essentially a form of assignment/naming. So, if we adopt an inclusive view of assignment, the 101 code has 15 assignments and our refactored code has 6 – a significant drop, but nothing like an order-of-magnitude difference.

But trying to evaluate our refactor by counting assignment statements is probably a fool’s errand to begin with. What I really care about – and, I suspect, what you care about too – is the clarity of our code. To some degree, that’s inherently subjective and depends on your personal familiarity and preferences: by my lights, ==> {%, |$_}() is extremely clear. Maybe, after we spent three paragraphs on that line, you agree; or maybe you don’t, and I doubt anything further I could say would change your mind. At that subjective level, the refactored code looks clearer to me.

But clarity is not entirely a subjective matter. I argue that the refactored code is objectively clearer – and in exactly the ways the pointfree style is supposed to promote. Back at the beginning of this post, I claimed that writing tacit code has two main benefits: it provides better emphasis in your code, and it reduces the amount of context you need to hold in your head to understand any particular part of the code. Let’s look at each of these in turn.

In terms of emphasis, there’s one question I like to ask: what identifiers are in scope at the global program (or module) scope? Those identifiers receive the most emphasis; in an ideal world, they would be the most important. In the refactored code, there are no variables at all in the global scope and only one item: the kv-affix function. This function is appropriately in the global scope since it is of global applicability (indeed, it could even be a candidate to be factored out into a separate module if this program grew).

Conversely, in the 101 code the global-scope variables are $file, @names, %matches, %sets, and @sorted. At least a majority of those are pure implementation details, undeserving of that level of emphasis. And some (though this bleeds into the “context” point, discussed below) are downright confusing in a global scope. What does @names refer to, globally? How about %matches? (does it change your answer if I tell you that Match is a Raku type?) What about %sets? (also a Raku type). Of course, you could argue that these names are just poorly chosen, and I wouldn’t necessarily disagree. But coming up with good variable names is famously hard, and figuring out names that are clear in a global scope is even harder – there are simply more opportunities for conceptual clash.

To really emphasize this last point, take a look at the final line of the refactored code:

multi kv-affix(($k, $)) { $k }

If the name $k occurred in a global context, it would be downright inscrutable. It could be an iteration variable (old-school programmers tend to start with i, and then move on to j and k). It could stand for degrees Kelvin or, oddly enough, the Coulomb constant. Or it could be anything, really.

But because its scope is more limited, the meaning is clear. The function takes a key-value pair (typically generated in Raku with the .kv method or the :kv adverb) and is named kv-affix. Given those surroundings, it’s no mystery at all that $k stands for “key”. Keeping items out of the global scope both provides better emphasis and provides a less confusing context to evaluate the meaning of different names.

The second large benefit I claimed for pointfree code is that it reduces the amount of context/state you need to hold in your head to understand any given bit of code. Comparing these two scripts also supports this point. Take a look at the last line of the 101 code:

say "$n has won %matches{$n} $match-noun and %sets{$n} $set-noun";

Mentally evaluating this line requires you to know the value of $n (defined 3 lines above), $match-noun (2 lines above), $set-noun (1 line), %sets (24 lines), and %matches (25 lines). Considering how simple this script is, that is a lot of state to track!

In contrast, the equivalent portion of the refactored code is

"%s has won %d %s and %d %s".sprintf(
    .<name>,
    .<matches>, kv-affix(.<matches>:kv),
    .<sets>,    kv-affix(.<sets>:kv) )

Evaluating the value of this expression only requires you to know the value of the topic variable (defined one line up) and the pure function kv-affix (defined 3–5 lines below). This is not an anomaly: every variable in the refactored code is defined no more than 5 lines away from where it is last used.

(Of course, writing code in a pointfree style is neither sufficient nor necessary to limit the scope of variables. But as this example illustrates – and my other experience backs up – it certainly helps.)

Raku supports pragmatic (not pure) pointfree programming

A true devotee of pointfree programming would likely object to the refactored code on the grounds that it’s not nearly tacit enough. Despite avoiding explicit assignment statements, it makes fairly extensive use of named function parameters and destructuring assignment; it just isn’t pure.

Nevertheless, the refactored code sits in a pragmatic middle ground that I find highly productive: it’s pointfree enough to gain many of the clarity, context, and emphasis benefits of that style without being afraid to use a name or two when that adds clarity.

And this middle ground is exactly where Raku shines (at least in my opinion! It’s entirely possible to write Raku in a variety of different styles and many of them are not in the least bit pointfree).

Here are some of the Raku features that support pragmatic pointfree programming (most, but not all, of which we saw above):

  - the ==> feed operator, which passes a value on as the final argument of the next call
  - blocks with an implicit topic variable, $_, and the tools that work on the topic (such as S///)
  - signatures with destructuring, including destructuring into $_ and matching on literals
  - the | slip operator for flattening lists into their surroundings
  - multi-dispatch functions, optionally constrained by a proto declaration
  - type constraints, which document and enforce a function’s interface

If you’re already a Raku pro, I hope this list and this post have given you some ideas for some other ways to do it. If you’re new to Raku, I hope this post has gotten you excited to explore some of the ways Raku could expand the way you program. And if you’re totally uninterested in writing Raku code – well, I hope you’ll reconsider, but even if you don’t, I hope that this post gave you something to think about and left you with some ideas to try out in your language of choice.

Raku Advent Calendar: Day 21: The Story Of Elfs, and Roles, And Santas’ Enterprise

Published by vrurg on 2020-12-21T01:01:00

Let’s be serious. After all, we’re grown-up people and know the full truth about Santa: he is a showman, and he is a top manager of Santa’s family business. No one knows his exact position, because we must not forget about Mrs. Santa, whose share in running the company is at least equal. The position is not relevant to our story anyway. What is important, though, is that running such a huge venture requires a lot of skills. Not to mention that the venture itself is also a tremendous show of its own, as one can find out from documentaries like The Santa Clause and many others filmed over the last several decades.

What would be the hardest part of running The North Pole Inc.? Logistics? Yeah, but with all the magic of the sleds, and the reindeer, and the Christmas night, this task is not that hard to get done. Manufacturing? That task has been delegated to small outsourcing companies like Lego, Nintendo, and dozens of others across the globe.

What else remains? The employees. Elves. And, gosh!, have you ever tried to organize them? Don’t even think of trying unless you have a backup in the form of a padded room, served by polite personnel with a reliable supply of pills, where you’d be spending your scarce vacation days. It’s an inhumane task, because it means putting together thousands, if not millions (as some estimates tell), of ambitious stage stars (no elf would ever consider himself a second-plane actor!), each charged with an amount of energy more appropriate to a small nuclear reactor… You know…

How do the Santas manage? Sure, they’re open-hearted and all-forgiving beyond average human understanding. But that’s certainly not enough to build a successful business! So there must be a secret ingredient, something common to both commercial structures and shows. And I think it’s well-done role assignment, which turns the Brownian motion of elf personalities into a self-organized structure.

In this article I won’t be telling how Raku helps the Santas sort things out or ease certain tasks. Instead, I’ll try to describe some events happening within The North Pole company with the help of the Raku language, and specifically its OO capabilities.

Elves

Basically, an elf is:

class Elf is FairyCreature {...}

For some reason, many of them don’t like this definition; but who are we to judge them, as long as many humans still don’t consider themselves a kind of ape? Similarly, some elves consider fairy archaic, outdated, and not related to them, modern beings.

But I digress…

The above definition is highly oversimplified, because if we start traversing the subtree of the FairyCreature class we’re going to find species as diverse as unicorns, goblins, gremlins, etc., etc., etc. Apparently, there must be something else defining the difference: something that provides properties specific to each particular kind of creature. If we expand the definition of the Elf class, we’ll see lines like these:

class Elf is FairyCreature {
    also does OneHead;
    also does UpperLimbs[2];
    also does LowerLimbs[2];
    also does Magic;
    ...
}

I must make a confession here: I wasn’t allowed to see the full sources. When I requested access to the fairy repository, the answer was: “Hey, if we reveal everything it’s not gonna look like magic anymore!” So, some code here is real, and some has been guessed at. I won’t tell you which is which; after all, let’s keep it magic!

Each line is a role defining a property, a feature, or a behavior intrinsic to a generic elf (don’t confuse it with the spherical cow). So, when we see a line like that, we say: Elf does Magic; or, in other words: class Elf consumes role Magic.

I apologize for not explaining in detail to Raku newcomers what a role is; hopefully the link will be helpful here. For those who know Java (I’m a dinosaur, I don’t), a role is somewhat similar to an interface, but better: it can define attributes and methods to be injected into a consuming class; it can require certain methods to be defined; and it can specify what other roles the class will consume and what other classes it will inherit from.
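To make those role features concrete, here’s a tiny invented example: a role that injects one method and requires, via a stub, that the consuming class provide another.

```raku
role Magic {
    method spell { ... }                  # stub: a consuming class must implement this
    method cast  { "✨ {self.spell}!" }   # injected into the consuming class
}

class Elf does Magic {
    method spell { "jingle-flash" }
}

say Elf.new.cast;   # ✨ jingle-flash!
```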

As a matter of fact, because of the complexity of elf species, the number of roles they do is too high to mention all of them here. Normally, when a class consumes only a few of them, it’s OK to write the code another way:

class DisgustingCreature is FairyCreature does Slimy does Tentacles[13] { ... }

But numerous roles are better put into the class body, each prefixed with also.

When Santas Hire An Elf

It would probably be incorrect to say that elves are hired by the Santas. In fact, as far as we know, all of them work for The North Pole Inc. exclusively. Yet my thinking is that at some point in history the hiring did take place. Let’s try to imagine how it could have happened.

One way or another, there is a special role:

role Employee {...}

And there is a problem too: our class Elf is already composed and immutable. Moreover, each elf is an object of that class! Or, saying the same in the Raku language: $pepper-minstix.defined && $pepper-minstix ~~ Elf. OK, so what’s the problem? If one tries to $pepper-minstix.hire(:position("security officer"), :company("The North Pole Inc."), ...) a big boom! will happen, because there is no such method ‘hire’ for invocant of type ‘Elf’. Surely, the boom! is expected, because long, long ago elves and work were as compatible as Christmas and summer! But then there were the Santas. And what they did is called mixing a role into an object:

$pepper-minstix does Employee;

From the outside, the does operator adds the content of the Employee role to the object on its left-hand side, making all role attributes and methods available on the object. Internally, it creates a new class for which the original $pepper-minstix.WHAT is the only parent class, and which consumes the role Employee. Eventually, after the does operator, say $pepper-minstix.WHAT will output something like (Elf+{Employee}). This is now the new class of the object held by the $pepper-minstix variable.
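A bare-bones sketch of that mixin behavior (class and role bodies invented for the demo):

```raku
class Elf { }
role Employee { method hire-date { '1823-12-24' } }

my $pepper-minstix = Elf.new;
$pepper-minstix does Employee;

say $pepper-minstix.^name;       # Elf+{Employee} – the generated mixin class
say $pepper-minstix.hire-date;   # 1823-12-24
say $pepper-minstix ~~ Elf;      # True – the original class is the parent
```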

Such a cardinal change in life made the elves much happier! Being joyful creatures anyway, they now got a great chance to also be useful, by sharing their joy with all the children, and sometimes not only children. One thing worried them, though. You know, it’s really impossible to find two identical people; all the more so, there are no two identical elves. But work? Wouldn’t it level them all down in a way? The Santas wouldn’t be the Santas if they didn’t share these worries with their new friends. To understand their solution, let’s see what the Employee role does for us. Of most interest to us are the following lines:

has $.productivity;
has $.creativity;
has $.laziness;
has $.position;
has $.speciality;
has $.department;
has $.company;

For simplicity, I don’t use typed attributes in the snippet, though they’re there in the real original code. For example, the $.laziness attribute is a coefficient used, among other things, in a formula calculating how much time is spent on coffee or eggnog breaks. The core of the formula is something like:

method todays-coffee-breaks-length {
    $.company.work-hours * $.laziness * (1 + 0.2.rand)
}

Because they felt their responsibility to the children, the elves agreed to limit their maximum laziness level. Therefore the full definition of the attribute is something like:

has Num:D $.laziness where * < 0.3;

If anybody thinks that the maximum is too high, then they don’t have the Christmas spirit in their hearts! Santa Claus was happy about it, so why wouldn’t we be? I’m personally sure his satisfaction is well understood, because his own maximum is somewhere closer to 0.5, but – shh! – let’s keep it a secret!
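The where constraint can be tried out on its own; anything at or above the limit is rejected at assignment time (the subset name here is invented):

```raku
subset ElfLaziness of Real where * < 0.3;

my ElfLaziness $fine = 0.25;                                  # accepted
my $result = (try my ElfLaziness $nope = 0.5) // 'rejected';  # type check fails
say $fine;     # 0.25
say $result;   # rejected
```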

Having all these characteristics in place, the Santas wanted a way to set them to combinations as diverse as possible. And here is something similar to what they came up with:

role Employee {
    ...
    method my-productivity {...}
    method my-creativity {...}
    method my-laziness {...}
    submethod TWEAK {
        $!productivity //= self.my-productivity;
        $!creativity //= self.my-creativity;
        $!laziness //= self.my-laziness;
    }
}
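The //= defaulting inside TWEAK is easy to see in miniature (class and attribute names invented):

```raku
role Defaulted {
    has $.speed;
    submethod TWEAK { $!speed //= 42 }   # only fills in undefined attributes
}
class Sled does Defaulted { }

say Sled.new.speed;              # 42 – supplied by TWEAK
say Sled.new(:speed(7)).speed;   # 7  – a constructor value wins
```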

Now it was up to each elf to define their own methods to set the corresponding characteristics. But most of them were OK with a special role proposed for this purpose:

role RandomizedEmployee {
    method my-productivity { 1 - 0.3.rand }
    method my-creativity { 1 - 0.5.rand }
    method my-laziness { 1 - 0.3.rand }
}

The hiring process now took the following form:

$pepper-minstix does Employee, RandomizedEmployee;

But, wait! We have three more attributes left behind! Yes, because those were left up to the Santas to fill in: they knew what kinds of workers they needed most, and where. Therefore the final version of the hiring code was more like:

$pepper-minstix does Employee(
        :company($santas-company),
        :department($santas-company.department("Security")),
        :position(HeadOfDepartment),
        :speciality(GuardianOfTheSecrets),
        ...
    ), 
    RandomizedEmployee;

With this line, Raku’s mixin protocol does the following:

  1. creates a new mixin class
  2. sets the attributes defined with named parameters
  3. invokes the role’s TWEAK submethod
  4. returns a new employee object

Because everybody knew that the whole thing was going to be a one-time venture (as the elves would never leave their new boss alone), the code was a kind of trade-off between efficiency and speed of coding. Still, there were some interesting tricks used, but discussing them is beyond this story’s mainline. I think many readers can find their own solutions to the problems mentioned here.

I, in turn, move on to a story which took place not long ago…

When Time Is Too Scarce

It was one of those craziest December days, when Mrs. Santa left for an urgent business trip. Already busy with mails and phone calls, Mr. Santa got additional duties in the logistics and packing departments, which are usually handled by his wife. There was no way he could skip those; otherwise the risk of something going wrong on the Christmas night would be too high. The only way to get everywhere on time was to cut down on the phone calls. That meant telling the elf-receptionist to answer with a “Santa is not available” message.

Santa sighed. He could almost see and hear the elf staring at him with deep regret and asking: “Nicholas, are you asking me to lie?” Oh, no! Of course he wouldn’t ask, but…

But? But! After all, even if the time/space magic of the Christmas night is not available on other days of the year, Santa can still do other kinds of tricks! So, here is what he did:

role FirewallishReceptionist {
    has Bool $.santa-is-in-the-office;
    has Str $.not-available-message;
    method answer-a-call {
        if $.santa-is-in-the-office {
            self.transfer-call: $.santas-number;
        }
        else {
            self.reply-call: $.not-available-message,
                             :record-reply,
                             :with-merry-christmas;
        }
    }
}

my $strict-receptionist = 
    $receptionist but FirewallishReceptionist(
        :!santa-is-in-the-office, 
        :not-available-message(
            "Unfortunately, Santa is not available at the moment."
            ~ ... #`{ the actual message is longer than this }
        )
    );

$company.give-a-day-off: $receptionist;
$company.santa-office-frontdesk.assign: :receptionist($strict-receptionist);

The but operator is similar to does, but instead of altering its left-hand-side operand, it creates a clone and mixes the right-hand-side role into the clone.
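A minimal contrast between but and does (the role is invented for the demo):

```raku
role OutOfOffice { method answer { 'Santa is not available' } }

my $receptionist = 'Bernard';
my $strict       = $receptionist but OutOfOffice;

say $strict.answer;                # Santa is not available
say $strict ~~ OutOfOffice;        # True
say $receptionist ~~ OutOfOffice;  # False – the original is untouched
```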

Just imagine the amazement of the receptionist when he saw his own copy taking his place at his desk! But a day off is a day off; he wasn’t really much against applying his laziness coefficient to the rest of that day…

As to Santa himself… He has never been really proud of what he did that day, even though it was needed in the name of saving the Christmas. Besides, the existence of a clone created a few awkward situations later, especially when both elves were trying to do the same work while still sharing some data structures. But that’s a separate story of its own…

When New Magic Helps

Have you seen the elves this season? They’re always very strict in sticking to the latest tendencies of Christmas fashion: rich colors, spangles, all fun and joy! Yet this year is really something special!

It all started at the end of the spring. Santa was sitting in his chair, having a well-deserved rest after the last Christmas he had served. The business did not demand as much attention as it usually does at the end of the autumn. So he was sitting by the fireplace, drinking chocolate, and reading the news, though the news was far from the best part of Santa’s respite (not a word about 2020!). Eventually, Santa put away his tablet, took a deep sip from his giant mug, and said aloud: “Time to change their caps!” No idea what pushed him to this conclusion, but from that moment on the elves knew that a new fashion was coming!

The idea Santa wanted to implement was to add WiFi connection and LEDs to elvish caps, and to make the LEDs twinkle with patterns available from a local server of The North Pole Inc. Here is what he started with:

role WiFiConnect {
    has $.wifi-name is required;
    has $.wifi-user is required;
    has $.wifi-password is required;
    submethod TWEAK { 
        self.connect-wifi( $!wifi-name, $!wifi-user, $!wifi-password );
    }
}

role ShinyLEDs {
    submethod TWEAK { 
        if self.test-circuits {
            self.LED( :on );
        }
        if self ~~ WiFiConnect {
            self.set-LED-pattern: self.fetch( :config-key<LED-pattern> );
        }
    }
}

class ElfCap2020 is ElfCap does WiFiConnect does ShinyLEDs {...}

Note, please, that I don’t include the body of the class here for it’s too big for this article.

But the attempt to compile the code resulted in:

Method 'TWEAK' must be resolved by class ElfCap2020 because it exists in multiple roles (ShinyLEDs, WiFiConnect)

“Oh, sure thing!” – Santa grumbled to himself. And added a TWEAK submethod to the class:

    submethod TWEAK {
        self.WiFiConnect::TWEAK;
        self.ShinyLEDs::TWEAK;
    }

This made the compiler happy, and ElfCap2020.new came up with a new and astonishingly fun cap instance! “Ho-ho-ho!” – Santa couldn’t help laughing with joy. It was time to start producing the new caps for all company employees; and this was the moment when it became clear that mass production of the new cap would require the coordinated efforts of so many third-party vendors and manufacturers that there was no way to equip everybody with the new toy by the time Christmas came.

Does Santa give up? No, he never does! What if we try to modernize the old caps? It would only require so many LEDs and controllers and should be feasible to handle on time!

Suit the action to the world! With a good design it should be no harder than to:

$old-cap does (WiFiConnect(:$wifi-name, :$wifi-user, :$wifi-password), ShinyLEDs);

And… Boom!

Method 'TWEAK' must be resolved by class ElfCap+{WiFiConnect,ShinyLEDs} because it exists in multiple roles (ShinyLEDs, WiFiConnect)

Santa sighed. No doubt, this was expected. Because does creates an implicit empty class, the two submethods from both roles clash when the compiler tries to install them into that class. A dead end? No way! Happy endings are what Santa loves! And he knows what to do. He knows that a new version of the Raku language is in development. It is not released yet, but it is available for testing with the Rakudo compiler if requested with use v6.e.PREVIEW at the very start of a compilation unit, which is normally a file.

Also, Santa knows that one of the changes the new language version brings is that it keeps submethods where they were declared, no matter what. It means that where previously a submethod was copied over from a role into the class consuming it, it will now remain the sole property of the role. And the language itself now takes care of walking over all elements of a class inheritance hierarchy, including roles, and invoking their constructor and/or destructor submethods if there are any.

Not sure what it means? Check out the following example:

use v6.e.PREVIEW;
role R1 {
    submethod TWEAK { say ::?ROLE.^name, "::TWEAK" }
}
role R2 {
    submethod TWEAK { say ::?ROLE.^name, "::TWEAK" }
}
class C { };
my $obj = C.new;
$obj does (R1, R2);
# R1::TWEAK
# R2::TWEAK
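
For readers more at home in Python, the cooperative super().__init__ chain is a loose analogue of this behavior: every class and mixin along the MRO gets its constructor invoked exactly once, without the composing class having to dispatch by hand. The class names below mirror the generic R1/R2 example and are illustrative only:

```python
calls = []

class R1:
    def __init__(self):
        super().__init__()  # keep walking the MRO
        calls.append("R1::init")

class R2:
    def __init__(self):
        super().__init__()
        calls.append("R2::init")

class C(R1, R2):
    pass  # no constructor of its own; the MRO walk covers everything

C()
print(calls)
```

Note that Python runs the constructors from the end of the MRO first, so the order comes out reversed relative to the Raku output above.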

Apparently, adding use v6.e.PREVIEW at the beginning of the modernization script makes the $old-cap does (WiFiConnect, ShinyLEDs); line work as expected!

Moreover, switching to Raku 6.e also makes submethod TWEAK unnecessary for the ElfCap2020 class if its only function is to dispatch to the role TWEAKs. Though, to be frank, Santa kept it anyway, as he needed a few adjustments to be done at construction time. But the good thing is that he no longer had to worry much about the minor details of combining all the class components together.

And so the task was solved. At the first stage all the old caps were modernized and made ready before the season started and Christmas preparations took all the remaining spare time of The North Pole Inc. The new caps will now be produced without extra fuss and be ready for the 2021 season. The time spared, Santa used to adapt WiFiConnect and ShinyLEDs for use with his sleds too. When told by The Security Department that additional illumination makes a sled’s camouflaging much harder, if possible at all, Santa only shrugged and replied: “You’ll manage, I have my trust in you!” And they did, but that’d be one more story…

Happy End

When it comes to The North Pole it’s always hard to tell the truth from fairy tales, and to separate magic from science. But, after all, a law of nature tells us that any sufficiently advanced technology is indistinguishable from magic. With Raku we try to bring a little bit of good magic into this life. It is so astonishing to know that Raku is supported by nobody else but the Santa family themselves!

Merry Christmas and Happy New Year!

Andrew Shitov: Advent of Code 2020 Day 18/25 in the Raku programming language

Published by Andrew Shitov on 2020-12-18T22:07:44

Today there’s a chance to demonstrate powerful features of Raku on the solution of Day 18 of this year’s Advent of Code.

The task is to print the sum of a list of expressions with +, *, and parentheses, but the precedence of the operations is equal in the first part of the problem, and is opposite to the standard precedence in the second part.

In other words, 3 + 4 * 5 + 6 is (((3 + 4) * 5) + 6) in the first case and (3 + 4) * (5 + 6) in the second.

Here is the solution. I hope you are impressed too.

use MONKEY-SEE-NO-EVAL;
 
sub infix:<m>($a, $b) { $a * $b }

say [+] ('input.txt'.IO.lines.race.map: *.trans('*' => 'm')).map: {EVAL($_)}

The lines with the expressions come from the file input.txt. For each line, I am replacing * with m, which I earlier made an infix operator that actually does multiplication.

For the second part, we need our m to have lower precedence than +. There’s nothing simpler:

sub infix:<m>($a, $b) is looser<+> { $a * $b }

Parsing and evaluation are done using EVAL.
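
The same swap-an-operator trick carries over to Python, piggybacking on eval the way the Raku version piggybacks on EVAL. Here is a sketch for the equal-precedence part, exploiting the fact that + and - share precedence and associate left to right (the wrapper class N is an invented name):

```python
import re

class N(int):
    # '+' keeps its meaning; '-' stands in for multiplication.
    # Since '+' and '-' share precedence and associate left-to-right,
    # plain eval() evaluates the expression with equal precedence.
    def __add__(self, other):
        return N(int(self) + int(other))
    def __sub__(self, other):
        return N(int(self) * int(other))

def part_one(expr):
    # swap '*' for '-', then wrap every literal in N so the
    # overloaded operators are the ones eval() ends up calling
    return eval(re.sub(r"(\d+)", r"N(\1)", expr.replace("*", "-")))

print(part_one("3 + 4 * 5 + 6"))  # 41, i.e. (((3 + 4) * 5) + 6)
```

For the second part you would instead map * to - and overload __sub__ as multiplication while keeping addition on an operator that binds tighter than -, mirroring the is looser&lt;+&gt; trait.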

* * *

→ Browse the code on GitHub
→ See all blog posts on Advent of Code 2020

p6steve: Raku Santa Emoticon [}:]>*

Published by p6steve on 2020-12-16T16:08:05

Santa has been fretting about the most concise way to use his personal emoticon [}:]>* programmatically in a raku one-liner.

The best he can do is…

raku -e 'sub santa($x is copy){$x~~s/ <[}:]>* /claus/; $x.say}; santa(":")'
#OUTPUT claus

Can you help him? If so – please send your version via the Comments field below.

The rules are:

(i) to use raku (the language formerly known as perl6), perl5 and other languages will be considered too

(ii) to use the character sequence [}:]>* (or reversed, no spaces)

(iii) these characters must have semantic meaning in the code (i.e. not just as a comment)

(iv) it should be a one-liner that uses raku -e on the command line

The objective criterion is: shortest count of chars between the ”

Kudos will also be given to the overall Christmassy-ness … in the example, a call to ‘santa’ creates output ‘claus’

Merry Christmas to one and all (and any)

PS. Code golf is very, very naughty, so be sure not to do this in your day job!

Andrew Shitov: The second wave of Covid.observer

Published by Andrew Shitov on 2020-12-15T22:15:33

When I started covid.observer about seven months ago, I thought there would be no need to update it after about 3-4 months. In reality, we are approaching the end of the year, and I will have to fix the graphs which display data per week, as the week numbers will very soon wrap around.

All this time, more data arrived, and I also expanded the site by adding separate statistics for the regions of Russia, with its 85 subdivisions, which brought the total count of countries and regions up to almost 400.

mysql> select count(distinct cc) from totals;
+--------------------+
| count(distinct cc) |
+--------------------+
|                392 |
+--------------------+
1 row in set (0.00 sec)

Due to frequent updates that change data in the past, it is not that easy to make incremental updates of the statistics, and again, I did not expect that I’d run the site for so long.

mysql> select count(distinct date) from daily_totals;
+----------------------+
| count(distinct date) |
+----------------------+
|                  329 |
+----------------------+
1 row in set (0.00 sec)

The bottom line is that the daily generation became clumsy and slow. Before summer, the whole website could be regenerated in less than 15 minutes, but now it takes 40-50 minutes. And I tend to do it twice a day, as a fresh portion of today’s Russian data arrives a few hours after we’ve got the daily update from the Johns Hopkins University (for yesterday’s stats).

But the scariest signals began when the program started crashing with quite unpleasant errors.

Latest JHU data on 12/12/20
Latest RU data on 12/13/20
Generating impact timeline...
Generating world data...
MoarVM panic: Unable to initialize event loop
Failed to open file /Users/ash/Projects/covid.observer/COVID-19/csse_covid_19_data/csse_covid_19_daily_reports_us/12-10-2020.csv: Too many open files
Generating impact timeline...
Generating world data...
Not enough positional arguments; needed at least 4
  in sub per-capita-data at /Users/ash/Projects/covid.observer/lib/CovidObserver/Statistics.rakumod (CovidObserver::Statistics) line 1906

The errors were not consistent, and I managed to re-run the program by pieces to get the update. But none of the errors were easily explainable.

MoarVM panic gives no explanation, but it completely disappears if I run the program in two parts:

$ ./covid.raku fetch
$ ./covid.raku generate

instead of a combined run that both fetches the data and generates the statistics:

$ ./covid.raku update

The Too many open files error is a strange one: while I process the files in loops, I do not intentionally keep them open. But that error seems to be solved by raising a system limit:

$ ulimit -n 10000

The final error, Not enough positional arguments; needed at least 4, is the weirdest. Such a thing happens when you call a function with fewer arguments than it expects. That never occurred for months after all bugs were found and fixed. It can only be explained by a new piece of data. Indeed, it may happen that some data is missing, but I believed I had already found all the cases where I need to provide the function calls with default zero values.

Having all that, and the fact that a program run takes dozens of minutes before you can catch an error, it was quite frustrating.

And here comes Liz!

She proposed to look into it and actually spent a whole day on the job: first installing the code and all its requirements, then actually running, debugging, and re-running. By the end of the day she created a pull request, which made the program twice as fast!

Let’s look at the changes. There are three of them (but no, they do not directly answer the above-mentioned three error messages).

The first two changes introduce parallel processing of countries (remember, there are about 400 of what is considered a unique $cc in the program).

my %country-stats = get-known-countries<>.race(:1batch,:8degree).map: -> $cc {
   $cc => generate-country-stats($cc, %CO, :%mortality, :%crude, :$skip-excel)
}

Calling .race on the result of get-known-countries() function improves the previously sequential processing of countries. Indeed, their stats are computed independently, so there’s no reason for one country to wait for another. The parameters of race, the batch size and the number of workers, can probably be tuned to fit your hardware.
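
A rough Python analogue of the same change, fanning the independent per-country work out to a pool of workers (the function body and country list are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_country_stats(cc):
    # stand-in for the real, independent per-country computation
    return f"stats for {cc}"

country_codes = ["US", "RU", "DE", "NL"]  # the real list holds ~400 entries

# loose analogue of get-known-countries<>.race(:1batch, :8degree).map
with ThreadPoolExecutor(max_workers=8) as pool:
    country_stats = dict(
        zip(country_codes, pool.map(generate_country_stats, country_codes))
    )

print(country_stats["NL"])
```

As with .race, this only pays off because no country's stats depend on another's.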

The second change is similar, but for another part of the code where the continents are processed in a loop:

for %continents.keys.race(:1batch,:8degree) -> $cont {
   generate-continent-stats($cont, %CO, :$skip-excel);
}

Finally, the third change is to make some counters native integers instead of Raku Ints:

my int $c = $confirmed[$index] // 0;
my int $f = $failed[$index] // 0;
my int $r = $recovered[$index] // 0;
my int $a = $active[$index] // 0;

I understand that this reduces both the memory usage and the processing time of these variables, but for some reason it also eliminated the error about the number of function arguments.

And finally, I want to mention the <> thing that you may have noticed in the first code change. This is the so-called decontainerization operator. What it does is illustrated by this example from the documentation:

use JSON::Tiny;

my $config = from-json('{ "files": 3, "path": "/home/some-user/raku.pod6" }');

say $config.raku;
# OUTPUT: «${:files(3), :path("/home/some-user/raku.pod6")}» 

my %config-hash = $config<>;
say %config-hash.raku;
# OUTPUT: «{:files(3), :path("/home/some-user/raku.pod6")}»

The $config variable is a scalar variable that keeps a hash. To work with it as with a hash, the variable is decontainerized as $config<>. This gives us a proper hash %config-hash.

I think that’s it for now. The main advantage of the above changes is that the program now needs less than 25 minutes to re-generate the whole site and it does not fail.

Well, but it became a bit louder too as Rakudo uses more cores 🙂

Thanks, Liz!

Jo Christian Oterhals: Five books on swimming

Published by Jo Christian Oterhals on 2020-11-16T20:20:28

Five books on open water swimming

Recently I have read a lot of books on swimming — which, if you knew me, would seem unexpected. Having a fear of water after a near-drowning accident as a child, I never became a swimmer. Not even a so-so swimmer: I managed to learn what we in Norway call “Grandma swimming”, a sort of laborious and slow breast swimming with the head as high above water as humanly possible and the feet similarly low beneath.

But many years later, as an adult and a father, this slowly changed when my oldest son started attending swim practice. Even before taking up swimming as a sport, he had surpassed my abilities by a decent margin. After he became serious about training he almost instantly dwarfed me and my abilities.

As parents of swimmers know, being a swim parent involves lots of driving to-and-from and perhaps even more waiting. Sometimes I killed time waiting for him outside the pool area, looking in through the large glass windows that separated spectators —aka annoying parents — from swimmers. From a distance I was amazed by the progress he made month by month.

One summer day a year into his training I stood on a lake’s edge watching him swim happily towards the opposite side. When he passed the middle a couple of hundred feet out, I was struck by an uncomfortable thought: If anything happened to him now, I wouldn’t be able to help. And had I tried, I would probably need help myself.

In that very moment I decided to do something about it. I immediately signed up for a beginner’s swim course for adults. But ten weeks and ten lessons in, hanging from the pool side panting uncontrollably, I was struck by a second thought: The progress I’d seen in the children was impossible to match for us, the slightly overweight 40+ year old newbies. It would take time and patience to become even a so-so swimmer. And, as it turned out, it would take a lot of patience: A few years have passed and only recently have I started to feel that I master freestyle slightly — if swimming 50 meters freestyle without passing out constitutes mastering, that is. My technique is still laughable, breathing continues to be an issue, and I haven’t begun to tumble turn or back stroke yet.

So now I know: This takes time.

But to boost my motivation I turned to books — like I do every time I become interested in a new subject. These are not instructional or teach-yourself books, but inspirational books about the topic I’m interested in. In this case about swimmers who do unimaginable feats and/or about the history and cultural impact of swimming.

As they work as inspiration for me, maybe they will for you too. That’s why I give you quick reviews of five swimming related books I’ve read the last months. They are: Swimming to Antarctica by Lynne Cox, The art of resilience by Ross Edgley, Why we swim by Bonnie Tsui, Open Water by Mikael Rosén, and Grayson, also by Lynne Cox.

Lynne Cox: Swimming to Antarctica

This is the autobiography of the accomplished open water swimmer Lynne Cox. It starts in the seventies when Lynne’s family moves from the US east coast to California, so that the children can maximise their swim training. It’s here Lynne discovers that she’s a better long distance swimmer than a sprint swimmer and gradually switches to open water swimming.

Soon she participates in her first long distance swim —a 20 mile swim from Catalina island to mainland Los Angeles— and discovers that she has the potential for record-breaking pace. It seems like she’s a natural at the discipline. Later she will learn that her body is unique in preserving body heat in cold water conditions.

Next up are more feats, such as swimming the English channel. That one earns her an invitation to Cairo to swim the Nile, etc. While all this is happening, she starts to form an idea of becoming the first to swim from the US to the Soviet Union. Lynne sees this as a way to establish bonds and reduce tension between the people of the two countries. Alas, hardly anyone shares her enthusiasm, so large parts of the book cover the quest of getting the necessary permits to cross between two islands on the Alaskan and Siberian sides of the Bering strait.

Lynne Cox swimming the Cook strait in New Zealand (1975)

That process took maybe ten years and is, to me, the book’s heart and soul. Swimming aside, this tenaciousness is a testament to her ability to persevere not only in open water but also in the intricacy and bureaucracy that is international politics. As such I think Swimming through the Iron Curtain would be a more fitting title than Swimming to Antarctica. But the book is written chronologically and ends with a swim to Antarctica, so I guess that’s why they chose the book’s title.

A weakness with the book is that it’s unusually light, almost coy, when it comes to Lynne’s relations with other humans. Her parents, who must have been an important part of her support, are peculiarly described as almost faceless entities in her vicinity. As for romantic relations, she occasionally alludes rather vaguely to how she enjoys the company of a certain individual or how she admires the muscular body of a fellow swimmer, etc. But relations are never described deeper than that, and never with more than a few sentences. That means that this autobiography is unusually auto: Her book is a story about herself and her inner journey powered by external journeys — swims that most people can only dream of.

But no matter what the story is called or what weaknesses it may have, it’s a great read about an extraordinary human. That’s why I recommend this book.

Ross Edgley: The art of resilience

I don’t remember how I stumbled across Ross Edgley and his “Great British Swim”, but I guess Google’s impenetrable algorithms had something to do with it. Regardless of how — when I did discover him (2018) he had just started his Red Bull sponsored swim around Great Britain, and posted weekly videos about his progress on YouTube. He synchronised his efforts with the tides for the duration of the swim: For 157 days he swam with the currents for six hours and rested the following six hours aboard his support boat. Non stop. For the entirety of the journey he never once set foot on land.

When he started the journey the farewell was rather low key, as the turnout consisted of family and friends. When he finished he’d become a household name and was welcomed by hundreds of other open water swimmers as well as large crowds on the beach. And that was well earned, if you ask me: 157 days — initially they thought they’d use half that time — and 2884 kilometers (1792 miles) later, he (and his crew) had completed a feat that I think will stay uncontested for a long time.

Ross Edgley at Margate having just finished his swim around Great Britain (2018)

In short, Ross’s story is an exciting one and he writes really well about it. That part of the book is impeccable. Strangely, the weakest parts are where Ross’s background as a sports scientist comes in. He’s eager to share theories about how to train, explain how endurance vs strength works, suggest workouts, etc. Every chapter ends with these science based musings. But they’re not integrated well into the storyline — yes, these too are filled with Ross’s enthusiasm, but all they do for me is punctuate and slow down an otherwise engaging story.

What I find peculiar, however, is that if you never watched the YouTube videos, the book makes it seem as if he got the idea and everything fell into place by itself. In real life a Red Bull sponsorship was what made the swim possible. It kept him fed and enabled him to keep a boat and a four person crew with him at all times during the 157 days (but he’s grateful towards the boat’s captain and attributes much of the success to him).

It’s also interesting that this book is the opposite of Lynne Cox’s memoirs in the sense that what Ross is mostly concerned with is the external journey itself. There are some hints of musings about how the swim influenced his personal development, but they are few and far between.

In the grand scheme of things, though, these are small flaws. If you manage to fight through the sports science this is a great read!

Lynne Cox: Grayson

This book covers one very special day in Lynne Cox’s life — a day that wasn’t covered in her autobiography. One early morning around daybreak, the seventeen year old Lynne is midway through a solitary open water swim practice. Suddenly she experiences unusual disturbances in the water, only to discover that they’re caused by a baby gray whale.

What seems to be a fun encounter quickly turns into a more serious matter: Communicating with an experienced elderly man on shore, she realises that the baby whale has been separated from his mother. If she swims ashore the infant will follow her, strand and die. The story details how Lynne slowly coaxes the whale out to deeper water in the hope that they’ll by chance find his mother.

Where Swimming to Antarctica was a book as much about Lynne’s inner journeys as her outer, Grayson is even more of an inner journey. The book’s style reflects this. Grayson has a far more lyrical, introspective and even pensive form than her first book.

That’s not only positive. As mentioned it covers the events of this one morning only. The only perspective is Lynne’s, told in the present tense. To stretch a small story about one morning from one person’s perspective to the necessary 150 pages, a lot of the text is inner monologue. In my opinion that slows the narrative down, and not in a good way. The inner monologues become fillers that don’t drive the story.

What’s worse is that much of the inner monologue doesn’t seem entirely believable. The amount of depth and detail with which Lynne allegedly remembers events and thoughts is more than anyone — with the possible exception of Marilu Henner — can recall some 30 odd years later. In addition many of the thoughts and reflections the 17 year old Lynne supposedly has are astonishingly mature and filled with knowledge she couldn’t possibly have had at the moment. Scientific facts about gray whales, for instance. These are the thoughts and retrospections of a person in their late forties. There’s nothing wrong with thoughts and retrospections from late forty-somethings — after all I’m one myself. And had they been presented as such, as present day reflections on that extraordinary morning in her teens, this would probably not feel alien to the story at all. But the choice to attribute the thoughts of a soon-to-be 50 year old to a 17 year old ends up — to me — as a significant stylistic crash.

With that in mind, I can’t help but think that if her editor had cut most of this, they’d have ended up with a tight and great story driven book for adolescents/young adults. As it is now, it’s not. But if you’re a less critical reader than me you’ll get a reasonably engaging book about the inner and outer journey of an almost superhuman swimmer. Should you want to read only one book by Lynne Cox, however, Swimming to Antarctica is the better choice.

Mikael Rosén: Open Water

Swedish author Mikael Rosén’s Open Water is not only about swimming itself, but also about swimming’s history, technique, science, cultural implications, racial issues, and more. Although you’d imagine that a mashup of all that would end up… well… mashy, the book is surprisingly clear and interesting despite juggling many sprawling subjects. As such the book really delivers on its subtitle The History and Technique of Swimming.

Although this book talks about specific swimmers such as the pre-WW2 olympian Johnny Weissmuller, the first man to swim across the English channel, captain Matthew Webb, or modern athletes like Michael Phelps, this is really not a book about individuals. These people are used to illustrate topics such as improvements in sports science (Weissmuller vs the Japanese swimmers that followed) or the history of open water swimming (captain Webb). Consistently interesting throughout, the most interesting part may be the second of the total eight chapters. That section explores prejudice — how female swimmers started to appear on the scene and suddenly break records previously held by men, or how a white, racist population’s negative reaction to black swimmers at public pools contributed to the establishment and strengthening of segregation laws in the southern states of the USA.

This tour de force of interesting and surprising facts reads a little as if Bill Bryson had written a book about swimming, though less humorous. But still, it’s almost on that level. Most books are not perfect, however, and this book is no exception: Written three years before Ross Edgley’s The Art of Resilience, it shares the latter’s insistence on closing each chapter with a little sports science, training programs, suggestions of drills, etc. They don’t bother me as much in this book — as opposed to Ross’s — as this book is not a chronological story driven narrative. Therefore the training parts fit a little better into the whole. But the book wouldn’t suffer if they’d been edited out.

All in all that’s minor criticism, so I recommend this book wholeheartedly.

Bonnie Tsui: Why we swim

It’s not a coincidence that this book comes last. The reason is that this book is best described in the context of the previously mentioned books.

Why we swim is in a way an amalgamation of the science/history aspects of Mikael Rosén’s Open Water and the introspection of Lynne Cox’s books. But where the latter describes her personal growth in retrospect, Bonnie Tsui documents her quest for personal growth through swimming more or less as it happens — as a part of the process of writing the book itself it seems.

Bonnie Tsui kicks her book off with the story of Guðlaugur Friðþórsson, a fisherman who was the sole survivor after a fishing vessel sank in the frigid winter waters off Iceland. Together with two mates he started to swim towards land, but not long after he was the only one still swimming. Against all odds Friðþórsson survived a six hour swim in six degree Celsius water.

For Tsui this becomes the entry point to the history of swimming. Her book is structured around five main topics, going from Survival, Well-Being, Community, Competition and ultimately to the more metaphysical and meditative subject of Flow. She takes on a tour of swimming history, starting in the stone age and the first documentation we have of humans swimming, ending with personal musings about not why we swim, but why she swims.

And this inclusion of a very visible I throughout the book — the chapter about Friðþórsson is not only about Friðþórsson but also about her meeting him and her participation in a swim honoring him — means that you can’t separate her personal journey from her exploration of the history, culture and science of swimming. Granted, Open Water is more hard core when it comes to facts, but the unique interspersion of the author’s personal story with the overarching topics of the book makes this the most beautifully written of the five books I’ve mentioned here. Read it!

So… do you become a better swimmer by reading? Of course not. Only practice can improve swimming (although you may pick up some valuable hints to how you can improve through pure instructional books, such as Terry Laughlin’s Total Immersion).

But this being November 2020, the year of Covid-19, all swimming pools are closed and I’m unable to practice and improve for a while. This may be the case for you too. But while you wait for the pools to open again or the summer to heat up the sea to a more welcoming temperature, spending some time on one or more of these books wouldn’t be the worst thing to do.

Who knows? Maybe you’ll come back to the water more inspired than before.

6guts: Taking a break from Raku core development

Published by jnthnwrthngtn on 2020-10-05T19:44:26

I’d like to thank everyone who voted for me in the recent Raku Steering Council elections. By this point, I’ve been working on the language for well over a decade, first to help turn a language design I found fascinating into a working implementation, and since the Christmas release to make that implementation more robust and performant. Overall, it’s been as fun as it has been challenging – in a large part because I’ve found myself sharing the journey with a lot of really great people. I’ve also tried to do my bit to keep the community around the language kind and considerate. Receiving a vote from around 90% of those who participated in the Steering Council elections was humbling.

Alas, I’ve today submitted my resignation to the Steering Council, on personal health grounds. For the same reason, I’ll be taking a step back from Raku core development (Raku, MoarVM, language design, etc.) Please don’t worry too much; I’ll almost certainly be fine. It may be I’m ready to continue working on Raku things in a month or two. It may also be longer. Either way, I think Raku will be better off with a fully sized Steering Council in place, and I’ll be better off without the anxiety that I’m holding a role that I’m not in a place to fulfill.

p6steve: Machine Math and Raku

Published by p6steve on 2020-09-29T21:57:33

In the machine world, we have Int and Num. A High Level Language such as Raku is an abstraction to make it easier for humans. Humans expect decimal math 0.1 + 0.2 = 0.3 to work.

Neither Int nor Num can do this! Huh? How can that be?

Well Int is base-2 and decimals are base-10. And Num (IEEE 754 Floating Point) is inherently imprecise.

Luckily we have Rat. Raku automatically represents decimals and other rational numbers as a ratio of two integers. Taking into account all the various data shuffling any program creates, the performance is practically as fast as Int. Precision, accuracy and performance are maintained for the entire realm of practical algebraic calculations.
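
Python makes the same trade-off visible if you opt into its standard-library rationals, which is a handy way to see why floats fail where a Rat-like type succeeds:

```python
from fractions import Fraction

# binary floats cannot represent 0.1 or 0.2 exactly...
print(0.1 + 0.2 == 0.3)  # False

# ...but a ratio of two integers can, just like Raku's Rat
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

The difference is that Raku gives you the rational behavior by default for decimal literals, with no explicit opt-in.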

And when we get to the limit of Rat (at around 3.614007241618348e-20), Raku automatically converts to a Num. After this, precision cannot be recovered – but accuracy and dynamic range are maintained. Modern Floating Point Units (FPUs) make Num nearly as fast as Int so performance is kept fast.

Here, we have smoothly transitioned to the world of physical measurements and irrational numbers such as square root and logarithms. It’s a one way trip. (Hint: If you want a Num in the first place, just use ‘e’ in the literal.)

Good call, Raku!

Footnote: FatRat and BigInt are not important for most use cases. They incur a performance penalty for the average program and rightly belong in library extensions, not the core language.

my Timotimo \this: Can you AppImageine that?

Published by Timo Paulssen on 2020-09-24T15:38:32

Can you AppImageine that?
Photo by Susan Holt Simpson / Unsplash

I have been unsatisfied with the installation process for MoarPerf for a little while now. You have to either compile the javascript (and css) yourself using npm (or equivalent) which takes a little while, or you have to rely on me releasing an "everything built for you" version to my github repo.

The last few days I've repeatedly bonked my metaphorical pickaxe against the stone wall that is unpleasantly long build times and an endless stream of small mistakes in order to bring you MoarPerf in an AppImage.

AppImage is a format for linux programs that allows programs to be distributed as a single executable file without a separate install process.

The AppImage for MoarPerf includes a full Rakudo along with the dependencies of MoarPerf, the built javascript, and the Raku code. This way you don't even have to have a working Rakudo installation on the machine you want to use to analyze the profiler results. Yours truly tends to put changes in MoarVM or nqp or Rakudo that sometimes prevent things from working fine, and resetting the three repos back to a clean state and rebuilding can be a bit of an annoyance.

With the MoarPerf AppImage I don't have to worry about this at all any more! That's pretty nice.

AppImages for Everyone!

With an AppImage created for MoarPerf it was not too much work to make an AppImage for Rakudo without a built-in application.

The next step is, of course, to pack everything up nicely to create a one-or-two-click solution to build AppImages for any script that you may be interested in running.

There has also already been a module that creates a Windows installer for a Raku program by installing a custom MoarVM/nqp/Rakudo into a pre-determined path (a limitation from back when Rakudo wasn't relocatable yet); maybe I should offer an installer like this for Windows users, too? The AppImage works much the same way, except it already makes use of the work that made Rakudo relocatable, so it doesn't need to run in a pre-defined path.

If you want to give building AppImages a try as well, feel free to steal everything from the rakudo-appimage repository, and have a look at the .travis.yml and the appimage folder in the moarperf repo!

In any case, I would love to hear from people, whether the AppImages for Rakudo and MoarPerf work on their machines, and what modules/apps they would like to have in an AppImage. Feel free to message me on twitter, write to the Raku users mailing list, or find me on freenode as timotimo.

Thanks for reading, stay safe, and see y'all later!

my Timotimo \this: How would you like a 1000x speed increase

Published by Timo Paulssen on 2020-07-01T20:09:08

How would you like a 1000x speed increase

Good, that's the click-baity title out of the way. Sorry for taking such a long time to write again! There really has been a lot going on.

To get back into blogging, I've decided to quickly write about a change I made some time ago already.

This change was for the "instrumented profiler", i.e. the one that will at run-time change all the code of the user's program, in order to measure execution times and count up calls and allocations.

In order to get everything right, the instrumented profiler keeps an entire call graph in memory. If you haven't seen something like it yet, imagine taking a stack trace at every point in your program's life; all these stack traces put together form the paths of a tree, each leading back to the root.

This means, among other things, that the same function can come up multiple times. With recursion, the same function can in fact come up a few hundred times "in a row". In general, if your call tree can become both deep and wide, you can end up with a whole truckload of nodes in your tree.

How would you like a 1000x speed increase
Photo by Gabriel Garcia Marengo / Unsplash

Is it a bad thing to have many nodes? Of course, it uses up memory. Only a single path on the tree is ever interesting at any one moment, though. Memory that's not read from or written to is not quite as "expensive". It never has to go into the CPU cache, and is even free to be swapped out to disk and/or compressed. But hold on, is this really actually the case?

It turns out that when you're compiling the Core Setting, which is a code file almost 2½ megabytes big with about 71½ thousand lines, and you're profiling during the parsing process, the tree gets enormous. At the same time, the parsing process slows to a crawl. What on earth is wrong here?

Well, looking at what MoarVM spends most of its time doing while the profiler runs gives you a good hint: It's spending almost all of its time going through the entirety of the tree for garbage collection purposes. Why would it do that, you ask? Well, in order to count allocated objects at every node, you have to match the count with the type you're allocating, and that means you need to hold on to a pointer to the type, and that in turn has to be kept up to date if anything moves (which the GC does to recently-born things) and to make sure types aren't considered unused and thrown out.

That's bad, right? Isn't there anything we can do? Well, we have to know at every node which counter belongs to which type, and we need to give all the types we have to the garbage collector to manage. But nothing forces us to have the types right next to the counter. And that's already the solution to the problem:

Holding on to all types is now the job of a little array kept just once per tree, and next to every counter there's just a little number that tells you where in the array to look.

This increases the cost of recording an allocation, as you'll now have to go to a separate memory location to match types to counters. On the other hand, the "little number" can be much smaller than before, and that saves memory in every node of the tree.
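The real change lives in MoarVM's C code, but a rough sketch of the data-layout idea in Raku terms (class and method names here are illustrative, not the actual MoarVM structures) might look like this:

```raku
# Before: every node pinned its own type pointers for the GC to traverse.
# After: the tree holds one shared array of types; nodes store only an index.
class ProfilerTree {
    has @.types;            # the only place GC-managed type objects live
    has %!index-for-type;
    method type-id(Mu $type) {
        %!index-for-type{$type.^name} //= do {
            @!types.push($type);
            @!types.end;    # the "little number" stored in each node
        }
    }
}

class CallNode {
    has %.alloc-count;      # type-id => number of allocations
    method record-allocation(ProfilerTree $tree, Mu $type) {
        %!alloc-count{$tree.type-id($type)}++;
    }
}
```

With this layout, a GC pass only has to visit the one @.types array per tree instead of walking every node.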

More importantly, the time cost of going through the profiler data is now independent of how big the tree is, since the individual nodes don't have to be looked at at all.

With a task as big as parsing the core setting, which is where almost every type, exception, operator, or sub lives, the difference is a factor of at least a thousand. Well, to be honest I didn't actually calculate the difference, but I'm sure it's somewhere between 100x faster and 10000x faster, and going from "ten microseconds per tree node" to "ten microseconds per tree" isn't a matter of a single factor increase, it's a complexity improvement from O(n) to O(1). As long as you can find a bigger tree, you can come up with a higher improvement factor. Very useful for writing that blog post you've always wanted to put at the center of a heated discussion about dishonest article titles!

Anyway, on testing my patch, esteemed colleague MasterDuke had this to say on IRC:

timotimo: hot damn, what did you do?!?! stage parse only took almost twice as long (i.e., 60s instead of the normal 37s) instead of the 930s last time i did the profile

(psst, don't check what 930 divided by 60 is, or else you'll expose my blog post title for the fraud that it is!)

Well, that's already all I had for this post. Thanks for your attention, stay safe, wear a mask (if by the time you're reading this the covid19 pandemic is still A Thing, or maybe something new has come up), and stay safe!

How would you like a 1000x speed increase
Photo by Macau Photo Agency / Unsplash

rakudo.org: Rakudo Star Release 2020.01

Published on 2020-02-24T00:00:00

my Timotimo \this: Introducing: The Heap Snapshot UI

Published by Timo Paulssen on 2019-10-25T23:12:36

Introducing: The Heap Snapshot UI

Hello everyone! In the last report I said that just a little bit of work on the heap snapshot portion of the UI should result in a useful tool.

Introducing: The Heap Snapshot UI
Photo by Sticker Mule / Unsplash

Here's my report for the first useful pieces of the Heap Snapshot UI!

Last time you already saw the graphs showing how the number of instances of a given type or frame grow and shrink over the course of multiple snapshots, and how new snapshots can be requested from the UI.

The latter now looks a little bit different:

Introducing: The Heap Snapshot UI

Each snapshot now has a little button for itself, they are in one line instead of each snapshot having its own line, and the progress bar has been replaced with a percentage and a little "spinner".

There are multiple ways to get started navigating the heap snapshot. Everything is reachable from the "Root" object (this is the norm for reachability-based garbage collection schemes). You can just click through from there and see what you can find.

Another way is to look at the Type & Frame Lists, which show every type or frame along with the number of instances that exist in the heap snapshot, and the total size taken up by those objects.

Type & Frame Lists

Introducing: The Heap Snapshot UI

Clicking on a type, or the name or filename of a frame leads you to a list of all objects of that type, all frames with the given name, or all frames from the given file. They are grouped by size, and each object shows up as a little button with the ID:

Introducing: The Heap Snapshot UI

Clicking any of these buttons leads you to the Explorer.

Explorer

Here's a screenshot of the explorer to give you an idea of how the parts go together that I explain next:

Introducing: The Heap Snapshot UI

The explorer is split into two identical panels, which allows you to compare two objects, or to explore in multiple directions from one given object.

There's an "Arrow to the Right" button on the left pane and an "Arrow to the Left" button on the right pane. These buttons make the other pane show the same object that the one pane currently shows.

On the left of each pane there's a "Path" display. Clicking the "Path" button in the explorer will calculate the shortest path to reach the object from the root. This is useful when you've got an object that you would expect to have already been deleted by the garbage collector, but for some reason is still around. The path can give the critical hint to figure out why it's still around. Maybe one phase of the program has ended, but something is still holding on to a cache that was put in as an optimization, and that still has your object in it? That cache in question would be on the path for your object.
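The actual path search lives in the moarperf backend; as a purely hypothetical sketch, a breadth-first search over a reference graph is enough to find such a shortest path (the %refs shape and sub name here are made up for illustration):

```raku
# %refs maps a collectable ID to the list of IDs it references.
sub shortest-path(%refs, $root, $target) {
    my %prev;
    my %seen = $root => True;
    my @queue = $root;
    while @queue {
        my $node = @queue.shift;
        if $node eq $target {
            # walk the predecessor chain back up to the root
            my @path = $target;
            @path.unshift(%prev{@path[0]}) while %prev{@path[0]}:exists;
            return @path;
        }
        for @(%refs{$node} // ()) -> $next {
            next if %seen{$next}++;
            %prev{$next} = $node;
            @queue.push($next);
        }
    }
    ();  # target not reachable from root
}
```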

The other half of each panel shows information about the object: Displayed at the very top is whether it is an object, a type object, an STable, or a frame.

Below that there is an input field where you can enter any ID belonging to a Collectable (the general term encompassing objects, type objects, STables, and frames) to have a look.

The "Kind" field needs to have the number values replaced with human-readable text, but it's not the most interesting thing anyway.

The "Size" of the Collectable is split into two parts. One is the fixed size that every instance of the given type has. The other is any extra data an instance of this type may have attached to it, that's not a Collectable itself. This would be the case for arrays and hashes, as well as buffers and many "internal" objects.

Finally, the "References" field shows how many Collectables are referred to by the Collectable in question (outgoing references) and how many Collectables reference this object in question.

Below that there are two buttons, Path and Network. The former was explained further above, and the latter will get its own little section in this blog post.

Finally, the bottom of the panel is dedicated to a list of all references - outgoing or incoming - grouped by what the reference means, and what type it references.

Introducing: The Heap Snapshot UI

In this example you see that the frame of the function display from elementary2d.p6 on line 87 references a couple of variables ($_, $tv, &inv), the frame that called this frame (step), an outer frame (MAIN), and a code object. The right pane shows the incoming references. For incoming references, the name of the reference isn't available (yet), but you can see that 7 different objects are holding a reference to this frame.

Network View

The newest part of the heap snapshot UI is the Network View. It allows the user to get a "bird's eye" view of many objects and their relations to each other.

Here's a screenshot of the network view in action:

Introducing: The Heap Snapshot UI

The network view is split into two panes. The pane on the left lists all types present in the network currently. It allows you to give every type a different symbol, a different color, or optionally make it invisible. In addition, it shows how many of each type are currently in the network display.

The right pane shows the objects, sorted by how far they are from the root (object 0, the one in Layer 0, with the frog icon).

Each object has one three-piece button. On the left of the button is the icon representing the type, in the middle is the object ID for this particular object, and on the right is an icon for the "relation" this object has to the "selected" object:

This view was generated for object 46011 (in layer 4, with a hamburger as the icon). This object gets the little "map marker pin" icon to show that it's the "center" of the network. In layers for distances 3, 2, and 1 there is one object each with a little icon showing two map marker pins connected with a squiggly line. This means that the object is part of the shortest path to the root. The third kind of icon is an arrow pointing from the left into a square that's on the right. Those are objects that refer to the selected object.

There is also an icon that's the same but the arrow goes outwards from the square instead of inwards. Those are objects that are referenced by the selected object. However, there is currently no button to have every object referenced by the selected object put into the network view. This is one of the next steps I'll be working on.

Customizing the colors and visibility of different types can give you a view like this:

Introducing: The Heap Snapshot UI

And here's a view with more objects in it:

Introducing: The Heap Snapshot UI

Interesting observations from this image:

Next Steps

You have no doubt noticed that the buttons for collectables are very different between the network view and the type/frame lists and the explorer. The reason for that is that I only just started with the network view and wanted to display more info for each collectable (namely the icons to the left and right) and wanted them to look nicer. In the explorer there are sometimes thousands of objects in the reference list, and having big buttons like in the network view could be difficult to work with. There'll probably have to be a solution for that, or maybe it'll just work out fine in real-world use cases.

On the other hand, I want the colors and icons for types to be available everywhere, so that it's easier to spot common patterns across different views and to mark things you're interested in so they stand out in lists of many objects. I was also thinking of a "bookmark this object" feature for similar purposes.

Before most of that, the network viewer will have to become "navigable", i.e. clicking on an object should put it in the center, grab the path to the root, grab incoming references, etc.

There also need to be ways to handle references you're not (or no longer) interested in, especially when you come across an object that has thousands of them.

But until then, all of this should already be very useful!

Here's the section about the heap snapshot profiler from the original grant proposal:

Looking at the list, it seems like the majority of intended features are already available or will be very soon!

Easier Installation

Until now the user had to download nodejs and npm along with a whole load of javascript libraries in order to compile and bundle the javascript code that powers the frontend of moarperf.

Fortunately, it was relatively easy to get travis-ci to do the work automatically and upload a package with the finished javascript code and the backend code to github.

You can now visit the releases page on github to grab a tarball with all the files you need! Just install all backend dependencies with zef install --deps-only . and run service.p6!

And with that I'm already done for this report!

It looks like the heap snapshot portion of the grant is quite a bit smaller than the profiler part, although a lot of work happened in moarvm rather than the UI. I'm glad to see rapid progress on this.

I hope you enjoyed this quick look at the newest pieces of moarperf!
  - Timo

my Timotimo \this: Progressing with progress.

Published by Timo Paulssen on 2019-09-12T19:50:18

Progressing with progress.

It has been a while since the last progress report, hasn't it?

Over the last few months I've been focusing on the MoarVM Heap Snapshot Profiler. The new format that I explained in the last post, "Intermediate Progress Report: Heap Snapshots", is available in the master branch of MoarVM, and it has learned a few new tricks, too.

The first thing I usually did when opening a Heap Snapshot in the heapanalyzer (the older command-line based one) was to select a Snapshot, ask for the summary, and then for the top objects by size, top objects by count, top frames by size, and/or top frames by count to see if anything immediately catches my eye. In order to make more sense of the results, I would repeat those commands for one or more other Snapshots.

Snapshot  Heap Size          Objects    Type Objects  STables  Frames  References  
========  =================  =========  ============  =======  ======  ==========  
0         46,229,818 bytes   331,212    686           687      1,285   1,146,426   
25        63,471,658 bytes   475,587    995           996      2,832   1,889,612   
50        82,407,275 bytes   625,958    1,320         1,321    6,176   2,741,066   
75        97,860,712 bytes   754,075    1,415         1,416    6,967   3,436,141   
100       113,398,840 bytes  883,405    1,507         1,508    7,837   4,187,184   
125       130,799,241 bytes  1,028,928  1,631         1,632    9,254   5,036,284   
150       145,781,617 bytes  1,155,887  1,684         1,685    9,774   5,809,084   
175       162,018,588 bytes  1,293,439  1,791         1,792    10,887  6,602,449   

Realizing that the most common use case should be simple to achieve, I first implemented a command summary all and later a command summary every 10 to get the heapanalyzer to give the summaries of multiple Snapshots at once, and to be able to get summaries (relatively) quickly even if there's multiple hundreds of snapshots in one file.

Sadly, this still requires the parser to go through the entire file to do the counting and adding up. That's obviously not optimal: even though this is an Embarrassingly Parallel task that can use every CPU core in your machine, it's still a whole lot of work just for the summary.

For this reason I decided to shift the responsibility for this task to MoarVM itself, to be done while the snapshot is taken. In order to record everything that goes into the Snapshot, MoarVM already differentiates between Object, Type Object, STable, and Frame, and it stores all references anyway. I figured it shouldn't have a performance impact to just add up the numbers and make them available in the file.

The result is that the summary table as shown further above is available only milliseconds after loading the heap snapshot file, rather than after an explicit request and sometimes a lengthy wait period.

The next step was to see if top objects by size and friends could be made faster in a similar way.

I decided that adding an optional "statistics collection" feature inside of MoarVM's heap snapshot profiler would be worthwhile. If it turns out that the performance impact of summing up sizes and counts on a per-type and per-frame basis makes capturing a snapshot too slow, it could be turned off.

Frontend work

> snapshot 50
Loading that snapshot. Carry on...
> top frames by size
Wait a moment, while I finish loading the snapshot...

Name                                  Total Bytes    
====================================  =============  
finish_code_object (World.nqp:2532)   201,960 bytes  
moarop_mapper (QAST.nqp:1764)         136,512 bytes  
!protoregex (QRegex.nqp:1625)         71,760 bytes   
new_type (Metamodel.nqp:1345)         40,704 bytes   
statement (Perl6-Grammar.nqp:951)     35,640 bytes   
termish (Perl6-Grammar.nqp:3641)      34,720 bytes   
<anon> (Perl6-BOOTSTRAP.c.nqp:1382)   29,960 bytes   
EXPR (Perl6-Grammar.nqp:3677)         27,200 bytes   
<mainline> (Perl6-BOOTSTRAP.c.nqp:1)  26,496 bytes   
<mainline> (NQPCORE.setting:1)        25,896 bytes   
EXPR (NQPHLL.nqp:1186)                25,760 bytes   
<anon> (<null>:1)                     25,272 bytes   
declarator (Perl6-Grammar.nqp:2189)   23,520 bytes   
<anon> (<null>:1)                     22,464 bytes   
<anon> (<null>:1)                     22,464 bytes   

Showing the top objects or frames for a single snapshot is fairly straight-forward in the commandline based UI, but how would you display how a type or frame develops its value across many snapshots?

Instead of figuring out the best way to display this data in the commandline, I switched focus to the Moarperf Web Frontend. The most obvious way to display data like this is a Line Graph, I believe. So that's what we have now!

Progressing with progress.

And of course you also get to see the data from each snapshot's Summary in graph format:

Progressing with progress.

And now for the reason behind this blog post's Title.

Progress Notifications

Using Jonathan's module Concurrent::Progress (with a slight modification) I sprinkled the code to parse a snapshot with matching calls of .increment-target and .increment. The resulting progress reports (throttled to at most one per second) are then forwarded to the browser via the WebSocket connection that already delivers many other bits of data.

The result can be seen in this tiny screencast:

Progressing with progress.

The recording is rather choppy because the heapanalyzer code was using every last drop of performance out of my CPU while it was trying to capture my screen.

There's obviously a lot still missing from the heap snapshot analyzer frontend GUI, but I feel like this is a good start, and even provides useful features already. The graphs for the summary data are nicer to read than the table in the commandline UI, and it's only in this UI that you can get a graphical representation of the "highscore" lists.

I think a lot of the remaining features will already be useful after just the initial pieces are in place, so a little work should go a long way.

Bits and Bobs

I didn't spend the whole time between the last progress report and now to work directly on the features shown here. Apart from Life Intervening™, I worked on fixing many frustrating bugs related to both of the profilers in MoarVM. I added a small subsystem I call VMEvents that allows user code to be notified when GC runs happen and other interesting bits from inside MoarVM itself. And of course I've been assisting other developers by answering questions and looking over their contributions. And of course there's the occasional video-game-development related experiment, for example with the GTK Live Coding Tool.

Finally, here's a nice little screencap of that same tool displaying a hilbert curve:

Progressing with progress.

That's already everything I have for this time. A lot has (had to) happen behind the scenes to get to this point, but now there was finally something to look at (and touch, if you grab the source code and go through the needlessly complicated build process yourself).

Thank you for reading and I hope to see you in the next one!
  - Timo

Jo Christian Oterhals: Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

Published by Jo Christian Oterhals on 2019-07-31T13:23:17

Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

This week’s Perl Weekly Challenge (#19) has two tasks. The first is to find all months with five weekends in the years from 1900 through 2019. The second is to program an implementation of word wrap using the greedy algorithm.

Both are pretty straight-forward tasks, and the solutions to them can (and should) be as well. This time, however, I’m also going to do the opposite and incrementally turn the easy solution into an unnecessarily complex one. Because in this particular case we can learn more by doing things the unnecessarily hard way. So this post will take a look at Dates and date manipulation in Perl 6, using PWC #19 task 1 as an example:

Write a script to display months from the year 1900 to 2019 where you find 5 weekends i.e. 5 Friday, 5 Saturday and 5 Sunday.

Let’s start by finding five-weekend months the easy way:

#!/usr/bin/env perl6
say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1 }, do 1900..2019 X 1,3,5,7,8,10,12;

The algorithm for figuring this out is simple. Given the prerequisite that there must be five occurrences of not only Saturday and Sunday but also Friday, you A) *must* have 31 days to cram five weekends into. And when you know that you’ll also see that B) the last day of the month MUST be a Sunday and C) the first day of the month MUST be a Friday (you don’t have to check for both; if A is true and B is true, C is automatically true too).
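The equivalence of B and C can even be sanity-checked mechanically; here is a small hedged check (not part of the original solution):

```raku
# For every 31-day month in range, "the 1st is a Friday" and
# "the 31st is a Sunday" must be the same condition.
for 1900..2019 X 1, 3, 5, 7, 8, 10, 12 -> ($y, $m) {
    die unless (Date.new($y, $m, 1).day-of-week == 5)
            == (Date.new($y, $m, 31).day-of-week == 7);
}
say 'B and C are equivalent for every 31-day month in range';
```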

The code above implements B and employs a few tricks. You read it from right to left (unless you write it from left to right, like this… say do 1900..2019 X 1,3,5,7,8,10,12 ==> map { Date.new: |$_, 1 } ==> grep *.day-of-week == 5 ==> join "\n"; )

Using the X operator I create a cross product of all the years in the range 1900–2019 and the months 1, 3, 5, 7, 8, 10, 12 (31-day months). In return I get a sequence containing all year-month pairs of the period.

The map function iterates through the Seq. There it instantiates a Date object. A little song and dance is necessary: As Date.new takes three unnamed integer parameters, year, month and day, I have to do something to what I have — a Pair with year and month. I therefore use the | operator to “explode” the pair into two integer parameters for year and month.

You can always use this for calling a subroutine with fixed parameters, using an array of parameter values rather than having separate variables for each parameter. The code below exemplifies usage:

my @list = 1, 2, 3;
sub explode-parameters($one, $two, $three) {
    …do something…
}
# traditional call
explode-parameters(@list[0], @list[1], @list[2]);
# …or using |
explode-parameters(|@list);

Back to the business at hand — the .grep keeps only the months where the 1st is a Friday, and those are our five-weekend months. So the output of the one-liner above looks something like this:

...
1997-08-01
1998-05-01
1999-01-01
...

This is a solution as good as any, and if a solution was all we wanted, we could have stopped here. But using this task as an example I want to explore ways to utilise the Date class. Example: the one-liner above does the job, but strictly speaking it doesn't output the months but the first day of those months. Correcting this is easy, because the Date class supports custom formatters, which use the sprintf syntax. To use one, you pass the named parameter "formatter" when instantiating the object.

say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1, formatter => { sprintf "%04d/%02d", .year, .month } }, do 1900..2019 X 1,3,5,7,8,10,12;

Every time a routine pulls a stringified version of the date, the formatter object is invoked. In our case the output has been changed to…

...
1997/08
1998/05
1999/01
...

Formatters are powerful. Look into them.

Now to the overly complex solution. This is the unthinking programmer’s solution, as we don’t suppose anything. The program isn’t told that 5 weekend months only can occur on 31 day months. It doesn’t know that the 1st of such months must be a Friday. All it knows is that if the last day of the month is not Sunday, it figures out the date of the last Sunday (this is not very relevant when counting three-day weekends, but could be if you want to find Saturday+Sunday weekends, or only Sundays).

#!/usr/bin/env perl6
my $format-it = sub ($self) {
    sprintf "%04d month %02d", .year, .month given $self;
}

sub MAIN(Int :$from-year = 1900, Int :$to-year where * > $from-year = 2019, Int :$weekend-length where * ~~ 1..3 = 3) {
    my $date-loop = Date.new($from-year, 1, 1, formatter => $format-it);
    while ($date-loop.year <= $to-year) {
        my $date = $date-loop.later(day => $date-loop.days-in-month);
        $date = $date.truncated-to('week').pred if $date.day-of-week != 7;
        my @weekend = do for 0..^$weekend-length -> $w {
            $date.earlier(day => $w).weekday-of-month;
        };
        say $date-loop if ([+] @weekend) / @weekend == 5;
        $date-loop = $date-loop.later(:1month);
    }
}

This code can solve the task not only for three-day weekends, but also for weekends consisting of Saturday + Sunday, as well as only Sundays. You control that with the command line parameter weekend-length=[1..3].

This code finds the last Sunday of each month and counts whether it has occured five times that month. It does the same for Saturday (if weekend-length=2) and Friday (if weekend-length=3). Like this:

my @weekend = do for 0..^$weekend-length -> $w {
    $date.earlier(day => $w).weekday-of-month;
};

The code then calculates the average weekday-of-month for these three days like this:

say $date-loop if ([+] @weekend) / @weekend == 5;

This line uses the reduction operator [+] on the @weekend list to find the sum of all elements. That sum is divided by the number of elements. If the result is 5, every one of those days occurs five times that month, and you have found a five-weekend month.
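To see the averaging trick in isolation (the numbers below assume a month where Friday, Saturday and Sunday each occur five times):

```raku
my @weekend = 5, 5, 5;               # weekday-of-month for Sunday, Saturday, Friday
say ([+] @weekend) / @weekend;       # 5
say ([+] (5, 5, 4)) / 3 == 5;        # False -- one of the days only occurs four times
```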

As for fun stuff to do with the Date object:

.later(day|month|year => Int) — adds the given number of time units to the current date. There’s also an earlier method for subtracting.

.days-in-month — tells you how many days there are in the current month of the Date object. The value may be 31, 30, 29 (February in a leap year) or 28 (February).

.truncated-to(week|month|day|year) — rolls the date back to the first day of the week, month, day or year.

.weekday-of-month — figures out what day of week the current date is and calculates how many of that day there have been so far in that month.
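Putting these methods to work on an arbitrarily chosen date (2019-07-31 happened to be the fifth Wednesday of its month):

```raku
my $d = Date.new(2019, 7, 31);
say $d.later(day => 1);        # 2019-08-01
say $d.days-in-month;          # 31
say $d.truncated-to('month');  # 2019-07-01
say $d.weekday-of-month;       # 5
```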

Apart from this you’ll see that I added the formatter in a different way this time. This is probably cleaner looking and easier to maintain.

In the end this post maybe isn't about dates and date manipulation at all, but rather is a call for all of us to use the documentation even more. I often think that Perl 6 should have a function for x, y or z — .weekday-of-month is one such example — and the documentation tells me that it actually does!

It’s very easy to pick up Perl 6 and program it as you would have programmed Perl 5 or other languages you know well. But the documentation has lots of info of things you didn’t have before and that will make programming easier and more fun when you’ve learnt about them.

I guess you don’t need and excuse to delve into the docs, but if you do the Perl Weekly Challenge is an excellent excuse for spending time in the docs!

my Timotimo \this: A Close Look At Controlling The MoarVM Profiler

Published by Timo Paulssen on 2019-05-22T14:41:13

A Close Look At Controlling The MoarVM Profiler

This is slightly tangential to the Rakudo Perl 6 Performance Analysis Tooling grant from The Perl Foundation, but it does interact closely with it, so this is more or less a progress report.

The other day, Jonathan Worthington of Edument approached me with a little paid job: Profiling very big codebases can be tedious, especially if you're interested in only one specific fraction of the program. That's why I got the assignment to make the profiler "configurable". On top of that, the Comma IDE will get native access to this functionality.

The actual request was just to allow specifying whether individual routines should be included in the profile via a configuration file. That would have been possible with just a simple text file that contains one filename/line number/routine name per line. However, I have been wanting something in MoarVM that allows much more involved configuration for many different parts of the VM, not just the profiler.

A Close Look At Controlling The MoarVM Profiler
Obligatory cat photo.

That's why I quickly developed a small and simple "domain-specific language" for this and similar purposes.

The language had a few requirements:

There's also some things that aren't necessary:

While thinking about what exactly I should build – before I eventually settled on building a "programming language" for this task – I bounced back and forth between the simplest thing that could possibly work (for example, a text file with a list of file/line/name entries) and the most powerful thing that I can implement in a sensible timeframe (for example, allowing a full NQP script). A very important realization was that as long as I require the first line to identify what "version" of configuration program it is, I could completely throw away the current design and put something else instead, if the need ever arises. That allowed me to actually commit to a design that looked at least somewhat promising. And so I got started on what I call confprog.

Here's an example program. It doesn't do very much, but shows what it's about in general:

version = 1
entry profiler_static:
log = sf.name;
profile = sf.name eq "calculate-strawberries"
profile |= sf.cu.filename eq "CustomCode.pm6"

The entry decides which stage of profiling this program is applied to. In this case, the profiler_static means we're seeing a routine for the first time, before it is actually entered. That's why only the information every individual invocation of the frame in question shares is available via the variable sf, which stands for Static Frame. The Static Frame also allows access to the Compilation Unit (cu) that it was compiled into, which lets us find the filename.

The first line that actually does something assigns a value to the special variable log. This will output the name of the routine the program was invoked for.

The next line will turn on profiling only if the name of the routine is "calculate-strawberries". The line after that will also turn on profiling if the filename the routine is from is "CustomCode.pm6".

Apart from profiler_static, there are a couple more entry points available.

The syntax is still subject to change, especially before the whole thing is actually in a released version of MoarVM.

There is a whole lot of other things I could imagine being of interest in the near or far future. One place I'm taking inspiration from is where "extended Berkeley Packet Filter" (eBPF for short) programs are being used in the Linux kernel and related pieces of software:

Oversimplifying a bit, BPF was originally meant for tcpdump so that the kernel doesn't have to copy all data over to the userspace process so that the decision what is interesting or not can be made. Instead, the kernel receives a little piece of code in the special BPF language (or bytecode) and can calculate the decision before having to copy anything.

eBPF programs can now also be used as a complex ruleset for sandboxing processes (with "seccomp"), to decide how network packets are to be routed between sockets (that's probably for Software Defined Networks?), what operations a process may perform on a particular device, whether a trace point in the kernel or a user-space program should fire, and so on.

So what's the status of confprog? I've written a parser and compiler that feeds confprog "bytecode" (which is mostly the same as regular moarvm bytecode) to MoarVM. There's also a preliminary validator that ensures the program won't do anything weird, or crash, when run. It is much too lenient at the moment, though. Then there's an interpreter that actually runs the code. It can already take an initial value for the "decision output value" (great name, isn't it) and it will return whatever value the confprog has set when it runs. The heap snapshot profiler is currently the only part of MoarVM that will actually try to run a confprog, and it uses the value to decide whether to take a snapshot or not.

Next up on the list of things to work on:

Apart from improvements to the confprog programming language, the integration with MoarVM lacks almost everything, most importantly installing a confprog for the profiler to decide whether a frame should be profiled (which was the original purpose of this assignment).

After that, and after building a bit of GUI for Comma, the regular grant work can resume: Flame graphs are still not visible on the call graph explorer page, and heap snapshots can't be loaded into moarperf yet, either.

Thanks for sticking with me through this perhaps a little dry and technical post. I hope the next one will have a little more excitement! And if there's interest (which you can signal by sending me a message on IRC, or posting on Reddit, or reaching me via Twitter @loltimo, on the Perl 6 Discord server etc) I can also write a post on how exactly the compiler was made, and how you can build your own compiler with Perl 6 code. Until then, you can find Andrew Shitov's presentations about making tiny languages in Perl 6 on YouTube.

I hope you have a wonderful day; see you in the next one!
  - Timo

PS: I would like to give a special shout-out to Nadim Khemir for the wonderful Data::Dump::Tree module which made it much easier to see what my parser was doing. Here's some example output from another simple confprog program:

[6] @0
0 = .Label .Node @1
│ ├ $.name = heapsnapshot.Str
│ ├ $.type = entrypoint.Str
│ ├ $.position is rw = Nil
│ └ @.children = [0] @2
1 = .Op .Node @3
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @4
│   ├ 0 = .Var .Node @5
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  @6
│   └ 1 = String Value ("we're in") @7
2 = .Op .Node @8
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @9
│   ├ 0 = .Var .Node @10
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = .Op .Node @12
│     ├ $.op = getattr.Str
│     ├ $.type is rw = CPType String :stringy  §6
│     └ @.children = [2] @14
│       ├ 0 = .Var .Node @15
│       │ ├ $.name = sf.Str
│       │ └ $.type = CPType MVMStaticFrame  @16
│       └ 1 = name.Str
3 = .Op .Node @17
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @18
│   ├ 0 = .Var .Node @19
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = String Value ("i am the confprog and i say:") @21
4 = .Op .Node @22
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @23
│   ├ 0 = .Var .Node @24
│   │ ├ $.name = log.Str
│   │ └ $.type = CPType String :stringy  §6
│   └ 1 = String Value ("  no heap snapshots for you my friend!") @26
5 = .Op .Node @27
│ ├ $.op = =.Str
│ ├ $.type is rw = Nil
│ └ @.children = [2] @28
│   ├ 0 = .Var .Node @29
│   │ ├ $.name = snapshot.Str
│   │ └ $.type = CPType Int :numeric :stringy  @30
│   └ 1 = Int Value (0) @31

Notice how it shows the type of most things, like name.Str, as well as cross-references for things that appear multiple times, like the CPType String. Particularly useful is giving your own classes methods that specify exactly how they should be displayed by DDT. Love It!

rakudo.org: Rakudo Star Release 2019.03

Published on 2019-03-30T00:00:00

brrt to the future: Reverse Linear Scan Allocation is probably a good idea

Published by Bart Wiegmans on 2019-03-21T15:52:00

Hi hackers! First of all, I want to thank everybody who gave such useful feedback on my last post.  For instance, I found out that the similarity between the expression JIT IR and the Testarossa Trees IR is quite remarkable, and that they have a fix for the problem that is quite different from what I had in mind.

Today I want to write something about register allocation, however. Register allocation is probably not my favorite problem, on account of being both messy and thankless. It is a messy problem because - aside from being NP-hard to solve optimally - hardware instruction sets and software ABIs introduce all sorts of annoying constraints. And it is a thankless problem because the cases in which a good register allocator is useful - for instance, when there are lots of intermediate values used over a long stretch of code - are fairly rare. Much more common are the cases in which either there are trivially sufficient registers, or ABI constraints force a spill to memory anyway (e.g. when calling a function, almost all registers can be overwritten).

So, on account of this being not my favorite problem, and also because I promised to implement optimizations in the register allocator, I've been researching if there is a way to do better. And what better place to look than one of the fastest dynamic language implementations around, LuaJIT? So that's what I did, and this post is about what I learned from that.

Truth be told, LuaJIT is not at all a learners' codebase (and I don't think its author would claim this). It uses a rather terse style of C and lots and lots of preprocessor macros. I had somewhat gotten used to the style from hacking dynasm though, so that wasn't so bad. What was more surprising is that some of the steps in code generation that are distinct and separate in the MoarVM JIT - instruction selection, register allocation and emitting bytecode - were all blended together in LuaJIT. Over multiple backend architectures, too. And what's more - all these steps were done in reverse order - from the end of the program (trace) to the beginning. Now that's interesting...

I have no intention of combining all phases of code generation like LuaJIT has. But processing the IR in reverse seems to have some interesting properties. To understand why that is, I'll first have to explain how linear scan allocation currently works in MoarVM, and is most commonly described:

  1. First, the live ranges of program values are computed. Like the name indicates, these represent the range of the program code in which a value is both defined and may be used. Note that for the purpose of register allocation, the notion of a value shifts somewhat. In the expression DAG IR, a value is the result of a single computation. But for the purposes of register allocation, a value includes all its copies, as well as values computed from different conditional branches. This is necessary because when we actually start allocating registers, we need to know when a value is no longer in use (so we can reuse the register) and how long a value will remain in use.
  2. Because a value may be computed from distinct conditional branches, it is necessary to compute the holes in the live ranges. Holes exist because if a value is defined in both sides of a conditional branch, the range will cover both the earlier (in code order) branch and the later branch - but from the start of the later branch to its definition that value doesn't actually exist. We need this information to prevent the register allocator from trying to spill-and-load a nonexistent value, for instance.
  3. Only then can we allocate and assign the actual registers to instructions. Because we might have to spill values to memory, and because values now can have multiple definitions, this is a somewhat subtle problem. Also, we'll have to resolve all architecture specific register requirements in this step.
In the MoarVM register allocator, there's a fourth step and a fifth step. The fourth step exists to ensure that instructions conform to x86 two-operand form (Rather than return the result of an instruction in a third register, x86 reuses one of the input registers as the output register. E.g. all operators are of the form a = op(a, b)  rather than a = op(b, c). This saves on instruction encoding space). The fifth step inserts instructions that are introduced by the third step; this is done so that each instruction has a fixed address in the stream while the stream is being processed.
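To make step 1 concrete, here is a toy Raku sketch (data structures mine, not MoarVM's) that computes a live interval - first definition to last use - for each value in a straight-line instruction stream, ignoring the holes described in step 2:

```raku
# Each "instruction" records which values it defines and which it uses.
my @instructions =
    { defs => ['a'], uses => [] },         # 0: a = ...
    { defs => ['b'], uses => ['a'] },      # 1: b = f(a)
    { defs => ['c'], uses => ['a', 'b'] }, # 2: c = g(a, b)
    { defs => [],    uses => ['c'] },      # 3: use c
;

my (%first-def, %last-use);
for @instructions.kv -> $i, %ins {
    %first-def{$_} //= $i for %ins<defs>.list;
    %last-use{$_}    = $i for |%ins<defs>, |%ins<uses>;
}

for %first-def.keys.sort -> $v {
    say "$v: live over [%first-def{$v}, %last-use{$v}]";
}
```

For this toy stream the intervals come out as a: [0, 2], b: [1, 2], c: [2, 3] - once a value's last use is passed, its register can be reused.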

Altogether this is quite a bit of complexity and work, even for what is arguably the simplest correct global register allocation algorithm. So when I started thinking of the reverse linear scan algorithm employed by LuaJIT, the advantages became clear:
There are downsides as well, of course. Not knowing exactly how long a value will be live while processing it may cause the algorithm to make worse choices in which values to spill. But I don't think that's really a great concern, since figuring out the best possible value is practically impossible anyway, and the most commonly cited heuristic - evict the value that is live furthest in the future, because this will release a register over a longer range of code, reducing the chance that we'll need to evict again - is still available. (After all, we do always know the last use, even if we don't necessarily know the first definition).

Altogether, I'm quite excited about this algorithm; I think it will be a real simplification over the current implementation. Whether that will work out remains to be seen of course. I'll let you know!

brrt to the future: Something about IR optimization

Published by Bart Wiegmans on 2019-03-17T06:23:00

Hi hackers! Today I want to write about optimizing IR in the MoarVM JIT, and also a little bit about IR design itself.

One of the (major) design goals for the expression JIT was to have the ability to optimize code over the boundaries of individual MoarVM instructions. To enable this, the expression JIT first expands each VM instruction into a graph of lower-level operators. Optimization then means pattern-matching those graphs and replacing them with more efficient expressions.

As a running example, consider the idx operator. This operator takes two inputs (base and element) and a constant parameter scale and computes base+element*scale. This represents one of the operands of an  'indexed load' instruction on x86, typically used to process arrays. Such instructions allow one instruction to be used for what would otherwise be two operations (computing an address and loading a value). However, if the element of the idx operator is a constant, we can replace it instead with the addr instruction, which just adds a constant to a pointer. This is an improvement over idx because we no longer need to load the value of element into a register. This saves both an instruction and valuable register space.

Unfortunately this optimization introduces a bug. (Or, depending on your point of view, brings an existing bug out into the open). The expression JIT code generation process selects instructions for subtrees (tile) of the graph in a bottom-up fashion. These instructions represent the value computed or work performed by that subgraph. (For instance, a tree like (load (addr ? 8) 8) becomes mov ?, qword [?+8]; the question marks are filled in during register allocation). Because an instruction always represents a tree, and because the graph is an arbitrary directed acyclic graph, the code generator projects that graph as a tree by visiting each operator node only once. So each value is computed once, and that computed value is reused by all later references.

It is worth going into some detail into why the expression graph is not a tree. Aside from transformations that might be introduced by optimizations (e.g. common subexpression elimination), a template may introduce a value that has multiple references via the let: pseudo-operator. See for instance the following (simplified) template:

(let: (($foo (load (local))))
    (add $foo (sub $foo (const 1))))

Both ADD and SUB refer to the same LOAD node


In this case, both references to $foo point directly to the same load operator. Thus, the graph is not a tree. Another case in which this occurs is during linking of templates into the graph. The output of an instruction is used, if possible, directly as the input for another instruction. (This is the primary way that the expression JIT can get rid of unnecessary memory operations). But there can be multiple instructions that use a value, in which case an operator can have multiple references. Finally, instruction operands are inserted by the compiler and these can have multiple references as well.

If each operator is visited only once during code generation, then this may introduce a problem when combined with another feature - conditional expressions. For instance, if two branches of a conditional expression both refer to the same value (represented by name $foo) then the code generator will only emit code to compute its value when it encounters the first reference. When the code generator encounters $foo for the second time in the other branch, no code will be emitted. This means that in the second branch, $foo will effectively have no defined value (because the code in the first branch is never executed), and wrong values or memory corruption is then the predictable result.

This bug has always existed for as long as the expression JIT has been under development, and in the past the solution has been not to write templates which have this problem. This is made a little easier by a feature of the let: operator, in that it inserts a do operator which orders the values that are declared to be computed before the code that references them. So the following is in fact non-buggy:

(let: (($foo (load (local))))  # code to compute $foo is emitted here
  (if (...)
    (add $foo (const 1))   # $foo is just a reference
    (sub $foo (const 2)))) # and here as well

The DO node is inserted for the LET operator. It ensures that the value of the LOAD node is computed before the reference in either branch


Alternatively, if a value $foo is used in the condition of the if operator, you can also be sure that it is available in both sides of the condition.

All these methods rely on the programmer being able to predict when a value will be first referenced and hence evaluated. An optimizer breaks this by design. This means that if I want the JIT optimizer to be successful, my options are:

  1. Fix the optimizer so as to not remove references that are critical for the correctness of the program
  2. Modify the input tree so that such references are either copied or moved forward
  3. Fix the code generator to emit code for a value, if it determines that an earlier reference is not available from the current block.
In other words, I first need to decide where this bug really belongs - in the optimizer, the code generator, or even the IR structure itself. The weakness of the expression IR is that expressions don't really impose a particular order. (This is unlike the spesh IR, which is instruction-based, and in which every instruction has a 'previous' and 'next' pointer). Thus, there really isn't a 'first' reference to a value, before the code generator introduces the concept. This property is in fact quite handy for optimization (for instance, we can evaluate operands in whatever order is best, rather than being fixed by the input order) - so I'd really like to preserve it. But it also means that the property we're interested in - a value is computed before it is used, in all possible code flow paths - isn't really expressible by the IR. And there is no obvious local invariant that can be maintained to ensure that this bug does not happen, so any correctness check may have to check the entire graph, which is quite impractical.

I hope this post explains why this is such a tricky problem! I have some ideas for how to get out of this, but I'll reserve those for a later post, since this one has gotten quite long enough. Until next time!

brrt to the future: A short post about types and polymorphism

Published by Bart Wiegmans on 2019-01-14T13:34:00

Hi all. I usually write somewhat long-winded posts, but today I'm going to try and make an exception. Today I want to talk about the expression template language used to map the high-level MoarVM instructions to low-level constructs that the JIT compiler can easily work with:

This 'language' was designed back in 2015 subject to three constraints:
Recently I've been working on adding support for floating point operations, and this means working on the type system of the expression language. Because floating point instructions operate on a distinct set of registers from integer instructions, a floating point operator is not interchangeable with an integer (or pointer) operator.

This type system is enforced in two ways. First, by the template compiler, which attempts to check if you've used all operands correctly. This operates during development, which is convenient. Second, by instruction selection, as there will simply not be any instructions available that have the wrong combinations of types. Unfortunately, that happens at runtime, and such errors are so annoying to debug that it motivated the development of the first type checker.

However, this presents two problems. One of the advantages of the expression IR is that, by virtue of having a small number of operators, it is fairly easy to analyze. Having a distinct set of operators for each type would undo that. But more importantly, there are several MoarVM instructions that are generic, i.e. that operate on integer, floating point, and pointer values. (For example, the set, getlex and bindlex instructions are generic in this way). This makes it impossible to know whether their values will be integers, pointers, or floats.

This is no problem for the interpreter since it can treat values as bags-of-bits (i.e., it can simply copy the union MVMRegister type that holds all values of all supported types). But the expression JIT works differently - it assumes that it can place any value in a register, and that it can reorder and potentially skip storing them to memory. (This saves work when the value would soon be overwritten anyway). So we need to know what register class that is, and we need to have the correct operators to manipulate a value in the right register class.

To summarize, the problem is:
There are two ways around this, and I chose to use both. First, we know as a fact for each local or lexical value in a MoarVM frame (subroutine) what type it should have. So even a generic operator like set can be resolved to a specific type at runtime, at which point we can select the correct operators. Second, we can introduce generic operators of our own. This is possible so long as we can select the correct instruction for an operator based on the types of the operands.

For instance, the store operator takes two operands, an address and a value. Depending on the type of the value (reg or num), we can always select the correct instruction (mov or movsd). It is however not possible to select different instructions for the load operator based on the type required, because instruction selection works from the bottom up. So we need a special load_num operator, but a store_num operator is not necessary. And this is true for a lot more operators than I had initially thought. For instance, aside from the (naturally generic) do and if operators, all arithmetic operators and comparison operators are generic in this way.

I realize that, despite my best efforts, this has become a rather long-winded post anyway.....

Anyway. For the next week, I'll be taking a slight detour, and I aim to generalize the two-operand form conversion that is necessary on x86. I'll try to write a blog about it as well, and maybe it'll be short and to the point. See you later!

brrt to the future: New years post

Published by Bart Wiegmans on 2019-01-06T13:15:00

Hi everybody! I recently read jnthn's Perl 6 new year's resolutions post, and I realized that this was an excellent example to emulate. So here I will attempt to share what I've been doing in 2018 and what I'll be doing in 2019.

In 2018, aside from the usual refactoring, bugfixing and the like:
So 2019 starts with me trying to complete the goals specified in that grant request. I've already partially completed one goal (as explained in the interim report) - ensuring that register encoding works correctly for SSE registers in DynASM. Next up is actually ensuring support for SSE (and floating point) registers in the JIT, which is surprisingly tricky, because it means introducing a type system where there wasn't really one previously. I will have more to report on that in the near future.

After that, the first thing on my list is the support for irregular register requirements in the register allocator, which should open up the possibility of supporting various instructions.

I guess that's all for now. Speak you later!

6guts: My Perl 6 wishes for 2019

Published by jnthnwrthngtn on 2019-01-02T01:35:51

This evening, I enjoyed the New Year’s fireworks display over the beautiful Prague skyline. Well, the bit of it that was between me and the fireworks, anyway. Rather than having its fireworks display at midnight, Prague puts it at 6pm on New Year’s Day. That makes it easy for families to go to, which is rather thoughtful. It’s also, for those of us with plans to dig back into work the following day, a nice end to the festive break.

Prague fireworks over Narodni Divadlo

So, tomorrow I’ll be digging back into work, which of late has involved a lot of Perl 6. Having spent so many years working on Perl 6 compiler and runtime design and implementation, it’s been fun to spend a good amount of 2018 using Perl 6 for commercial projects. I’m hopeful that will continue into 2019. Of course, I’ll be continuing to work on plenty of Perl 6 things that are for the good of all Perl 6 users too. In this post, I’d like to share some of the things I’m hoping to work on or achieve during 2019.

Partial Escape Analysis and related optimizations in MoarVM

The MoarVM specializer learned plenty of new tricks this year, delivering some nice speedups for many Perl 6 programs. Many of my performance improvement hopes for 2019 center around escape analysis and optimizations stemming from it.

The idea is to analyze object allocations, and find pieces of the program where we can fully understand all of the references that exist to the object. The points at which we can cease to do that is where an object escapes. In the best cases, an object never escapes; in other cases, there are a number of reads and writes performed to its attributes up until its escape.

Armed with this, we can perform scalar replacement, which involves placing the attributes of the object into local registers up until the escape point, if any. As well as reducing memory operations, this means we can often prove significantly more program properties, allowing further optimization (such as getting rid of dynamic type checks). In some cases, we might never need to allocate the object at all; this should be a big win for Perl 6, which by its design creates lots of short-lived objects.
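As a hedged illustration (plain Raku code showing the shape of the opportunity, not Rakudo's actual analysis): the object below never escapes its routine, so scalar replacement could keep its attributes in registers and skip the allocation entirely.

```raku
class Offset { has $.dx; has $.dy }

sub distance($x1, $y1, $x2, $y2) {
    # $o is only read inside this sub and never stored or returned,
    # so it never escapes: a scalar-replacing optimizer could keep
    # .dx and .dy in registers and elide the allocation.
    my $o = Offset.new(dx => $x2 - $x1, dy => $y2 - $y1);
    sqrt($o.dx ** 2 + $o.dy ** 2)
}

say distance(0, 0, 3, 4);  # 5
```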

There will be various code-generation and static optimizer improvements to be done in Rakudo in support of this work also, which should result in its own set of speedups.

Expect to hear plenty about this in my posts here in the year ahead.

Decreasing startup time and base memory use

The current Rakudo startup time is quite high. I’d really like to see it fall to around half of what it currently is during 2019. I’ve got some concrete ideas on how that can be achieved, including changing the way we store and deserialize NFAs used by the parser, and perhaps also dealing with the way we currently handle method caches to have less startup impact.

Both of these should also decrease the base memory use, which is also a good bit higher than I wish.

Improving compilation times

Some folks – myself included – are developing increasingly large applications in Perl 6. For the current major project I’m working on, runtime performance is not an issue by now, but I certainly feel myself waiting a bit on compiles. Part of it is parse performance, and I’d like to look at that; in doing so, I’d also be able to speed up handling of all Perl 6 grammars.

I suspect there are some good wins to be had elsewhere in the compilation pipeline too, and the startup time improvements described above should also help, especially when we pre-compile deep dependency trees. I’d also like to look into if we can do some speculative parallel compilation.

Research into concurrency safety

In Perl 6.d, we got non-blocking await and react support, which greatly improved the scalability of Perl 6 concurrent and parallel programs. Now many thousands of outstanding tasks can be juggled across just a handful of threads (the exact number chosen according to demand and CPU count).
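A minimal sketch of that non-blocking style (toy workload, names mine): each start block is scheduled on the thread pool, and awaiting the resulting Promises suspends without tying up a thread per task.

```raku
# Five tasks multiplexed over the thread pool; `await` collects
# the results once all Promises are kept.
my @squares = await (1..5).map(-> $n { start { $n * $n } });
say @squares;  # [1 4 9 16 25]
```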

For Perl 6.e, which is still a good way off, I’d like to have something to offer in terms of making Perl 6 concurrent and parallel programming safer. While we have a number of higher-level constructs that eliminate various ways to make mistakes, it’s still possible to get into trouble and have races when using them.

So, I plan to spend some time this year quietly exploring and prototyping in this space. Obviously, I want something that fits in with the Perl 6 language design, and that catches real and interesting bugs – probably by making things that are liable to occasionally explode in weird ways instead reliably do so in helpful ways, such that they show up reliably in tests.

Get Cro to its 1.0 release

In the 16 months since I revealed it, Cro has become a popular choice for implementing HTTP APIs and web applications in Perl 6. It has also attracted code contributions from a couple of dozen contributors. This year, I aim to see Cro through to its 1.0 release. That will include (to save you following the roadmap link):

Comma Community, and lots of improvements and features

I founded Comma IDE in order to bring Perl 6 a powerful Integrated Development Environment. We’ve come a long way since the Minimum Viable Product we shipped back in June to the first subscribers to the Comma Supporter Program. In recent months, I’ve used Comma almost daily on my various Perl 6 projects, and by this point honestly wouldn’t want to be without it. Like Cro, I built Comma because it’s a product I wanted to use, which I think is a good place to be in when building any product.

In a few months time, we expect to start offering Comma Community and Comma Complete. The former will be free of charge, and the latter a commercial offering under a subscribe-for-updates model (just like how the supporter program has worked so far). My own Comma wishlist is lengthy enough to keep us busy for a lot more than the next year, and that’s before considering things Comma users are asking for. Expect plenty of exciting new features, as well as ongoing tweaks to make each release feel that little bit nicer to use.

Speaking, conferences, workshops, etc.

This year will see me giving my first keynote at a European Perl Conference. I’m looking forward to being in Riga again; it’s a lovely city to wander around, and I remember having some pretty nice food there too. The keynote will focus on the concurrent and parallel aspects of Perl 6; thankfully, I’ve still a good six months to figure out exactly what angle I wish to take on that, having spoken on the topic many times before!

I also plan to submit a talk or two for the German Perl Workshop, and will probably find the Swiss Perl Workshop hard to resist attending once more. And, more locally, I’d like to try and track down other Perl folks here in Prague, and see if I can help some kind of Praha.pm to happen again.

I need to keep my travel down to sensible levels, but might be able to fit in the odd other bit of speaking during the year, if it’s not too far away.

Teaching

While I want to spend most of my time building stuff rather than talking about it, I’m up for the occasional bit of teaching. I’m considering pitching a 1-day Perl 6 concurrency workshop to the Riga organizers. Then we’ll see if there’s enough folks interested in taking it. It’ll cost something, but probably much less than any other way of getting a day of teaching from me. :-)

So, down to work!

Well, a good night’s sleep first. :-) But tomorrow, another year of fun begins. I’m looking forward to it, and to working alongside many wonderful folks in the Perl community.

rakudo.org: Rakudo Star Release 2018.10

Published on 2018-11-11T00:00:00