Planet Raku

Raku RSS Feeds

Roman Baumer (Freenode: rba #raku or ##raku-infra) / 2020-08-11T03:21:16


Raku Advent Calendar: RFC 54, by Damian Conway: Operators: Polymorphic comparisons

Published by p6steve on 2020-08-11T02:01:00

This RFC was originally proposed on August 7th 2000 and frozen in six weeks.

It described a frustration with comparison operations in Perl. The expression:

"cat" == "dog"   #True

Perl (and now Raku) has excellent support for generic programming because of dynamic typing, generic data types and interface polymorphism. Just about the only place where that DWIM genericity breaks down is in comparisons. There is no generic way to specify an ordering on dynamically typed data. For example, one cannot simply code a generic BST insertion method. Using <=> for the comparison will work well for numeric keys, but fails miserably on most string-based keys because <=> will generally return 0 for most pairs of strings. The above code would work correctly however if <=> detected the string/string comparison and automagically used cmp instead.

The Raku implementation is as follows:

There are three built-in comparison operators that can be used for sorting. They are sometimes called three-way comparators because they compare their operands and return a value meaning that the first operand should be considered less than, equal to or more than the second operand for the purpose of determining in which order these operands should be sorted. The leg operator coerces its arguments to strings and performs a lexicographic comparison. The <=> operator coerces its arguments to numbers (Real) and does a numeric comparison. The aforementioned cmp operator is the “smart” three-way comparator, which compares strings with string semantics and numbers with number semantics.[1]
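For a concrete feel, here is a quick sketch of the three comparators side by side:

say 10 <=> 9;         # More
say 10 leg 9;         # Less ("10" sorts before "9" as a string)
say 10 cmp 9;         # More (cmp picks numeric semantics here)
say "cat" cmp "dog";  # Less (and string semantics here)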

The allomorph types IntStr, NumStr, RatStr and ComplexStr may be created as a result of parsing a string quoted with angle brackets…

my $f = <42.1>; say $f.^name; # OUTPUT: «RatStr␤»

cmp returns either the Order::Less, Order::Same or Order::More object. It compares Pair objects first by key and then by value, etc.

Evaluates Lists by comparing element @a[$i] with @b[$i] (for some Int $i, beginning at 0) and returning Order::Less, Order::Same, or Order::More depending on if and how the values differ. If the operation evaluates to Order::Same, @a[$i + 1] is compared with @b[$i + 1]. This is repeated until one is greater than the other or all elements are exhausted. If the Lists are of different lengths, at most only $n comparisons will be made (where `$n = @a.elems min @b.elems`). If all of those comparisons evaluate to Order::Same, the final value is selected based upon which List is longer.
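A short illustration of that element-by-element behaviour:

say (1, 2, 3) cmp (1, 2, 4);  # Less (decided at the third element)
say (1, 2)    cmp (1, 2, 3);  # Less (equal prefix, so the shorter List sorts first)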

If $a eqv $b, then $a cmp $b always returns Order::Same. Keep in mind that certain constructs, such as Sets, Bags, and Mixes care about object identity, and so will not accept an allomorph as equivalent of its components alone.

Now, we can leverage the Raku sort syntax to suit our needs:

my @sorted = sort { $^a cmp $^b }, @values;
my @sorted = sort -> $a, $b { $a cmp $b }, @values;
my @sorted = sort * cmp *, @values;
my @sorted = sort &infix:«cmp», @values;

And neatly avoid duplication in a functional style:

# sort case-insensitively
say sort { $^a.lc cmp $^b.lc }, @words;
#          ^^^^^^     ^^^^^^  code duplication
# sort case-insensitively
say sort { .lc }, @words;

So this solution to RFC54 smoothly combines many of the individual capabilities of Raku – classes, allomorphs, dynamic typing, interface polymorphism, and functional programming – to produce a set of practical solutions to suit your coding style.

[1] This is ass-backwards from the RFC54 request with ‘cmp’ as the dynamic “apex”, degenerating to ‘<=>’ for Numeric or ‘leg’ for Lexicographic variants.

Rakudo Weekly News: 2020.32 Survey, Please

Published by liztormato on 2020-08-10T12:29:20

The TPF Marketing Committee wants to learn more about how you perceive “The Perl Foundation” itself, and asks you to fill in this survey (/r/rakulang, /r/perl comments). Thank you!

More RFC’s taken apart

Like clockwork, the RFC 20th anniversary blog posts have been appearing:

Looking forward to another 10 to come! If you’re interested in writing one, there are still some open slots available!

Memories of Rīga

Andrew Shitov has posted some pictures with memories of Rīga (/r/rakulang, /r/perl comments).

Just the One

Wenzel P. P. Peppmeyer is still at about 3 blog posts a week on average, but this week only saw one: Whereceptions.

Weekly Challenge

The entries for Challenge #72 that have Raku solutions:

Andrew Shitov reviewed all of the Raku solutions of Challenge #71. The Weekly Challenge #73 is up for your perusal!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

Looks like the heat is on! It caused only a little core development, but all the more blog posting! And yours truly keeps repeating: don’t forget to stay healthy and to stay safe. Expecting to see you all healthy next week!

Raku Advent Calendar: RFC 64: New pragma ‘scope’ to change Perl’s default scoping

Published by Moritz on 2020-08-10T00:01:02

Let’s talk about a fun RFC that mostly did not make its way into current day Raku, nor is it planned for later implementation.

This is about RFC 64 by Nathan Wiger. Let me quote the abstract:

Historically, Perl has had the default “everything’s global” scope. This means that you must explicitly define every variable with my, our, or its absolute package to get better scoping. You can ‘use strict’ to force this behavior, but this makes easy things harder, and doesn’t fix the actual problem.

Those who don’t learn from history are doomed to repeat it. Let’s fix this.

It seems use strict; simply has won, despite Nathan Wiger’s dislike for it.

Raku enables it by default, even for one-liners with the -e option; Perl 5 enables it with use 5.012 and later versions; and these days there’s even talk of enabling it by default in version 7 or 8.

I’d say the industry as a whole has moved in the direction of accepting the tiny inconvenience of having to declare variables over the massive benefit in safety and protection against typos and other errors. Hey, even JavaScript got a "use strict", and TypeScript enables it by default in modules. PHP 7 also got something comparable. The only holdout in the “no strict” realm seems to be Python.

With pretty much universal acceptance of required variable definitions, there was no compelling reason to improve implicit declaration, so no scope pragma was needed.
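To illustrate (my own minimal example, not from the original post): in Raku the declaration is simply required.

my $declared = 42;   # fine
say $declared;       # 42

# $undeclared = 42;  # would not even compile; Rakudo dies with (roughly):
# ===SORRY!=== Error while compiling ...
# Variable '$undeclared' is not declared. ...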

But… there’s always a “but”, isn’t there?

One of the primary motivations for not wanting to declare variables was laziness, and Raku did introduce several features that allow you to avoid some declarations:

I am glad Raku requires variable declarations by default, and haven’t seen any code in the wild that explicitly states no strict;. And without declarations, where would you even put the type constraint?
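And to answer that question concretely: in Raku the declaration is exactly where the type constraint goes.

my Int $count = 0;            # the constraint rides along with the declaration
my Str @names = <Alice Bob>;  # also works for arrays and other containers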

gfldex: Whereceptions

Published by gfldex on 2020-08-09T21:51:36

I have a sub that takes a file and tries to guard itself against a file that does not exist. Where clauses don’t make good error messages.

sub run-test(IO() $file where .e & .f) { };
run-test('not-there.txt');
# OUTPUT:
# Constraint type check failed in binding to parameter '$file'; expected anonymous constraint to be met but got IO::Path (IO::Path.new("not-th...)

The signature of that sub is quite expressive. Often we don’t have time to read code to hunt down mistakes. That’s why bad error messages are LTA (less than awesome). We can extend any where clause with a die or fail to provide a better message.

sub run-test(IO() $file where .e && .f || fail("file $_ not found")) { };
run-test('not-there.txt');
# OUTPUT:
# file not-there.txt not found

We can also throw exceptions, of course. With just one parameter, the signature has gotten quite long already. Also, when working with many files we would write the same code over and over again. Since we don’t code in Java, we would rather not.

my &it-is-a-file = -> IO() $_ {
    .e && .f || fail (X::IO::FileNotFound.new(:path(.Str)))
}
sub run-test(IO::Path $file where &it-is-a-file) { }

That’s much better. Since exceptions are classes we can reuse code amongst them. Like coloured output and checking for broken symlinks.

class X::Whereception is Exception is export {
    has $.stale-symlink is rw;
    method is-dangling-symlink {
        $.stale-symlink = do with $.path { .IO.l & !.IO.e };
    }
}

class X::IO::FileNotFound is X::Whereception is export {
    has $.path;
    method message {
        RED $.is-dangling-symlink ?? „The file ⟨$.path⟩ is a dangling symlink.“ !! „The file ⟨$.path⟩ was not found.“
    }
}
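Note that RED is a colouring helper from the module, not a builtin. A minimal stand-in (modelled on the one-liner in my other posts; hypothetical, the real one lives in the module) could be:

# hypothetical stand-in for the module's RED helper
sub RED(Str() $s) { $*OUT.t ?? "\e[31m$s\e[0m" !! $s }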

I have added a few to Shell::Piping. Suggestions on what else to check for are very welcome.

Raku Advent Calendar: RFC 5, by Michael J. Mathews: Multiline comments

Published by Antonio Gámiz Delgado on 2020-08-09T00:01:00

This is the first RFC proposed related to documentation. It asks for a feature common to most modern programming languages: multiline comments.

The problem of not having multiline comments is quite obvious: if you need to comment out a large chunk of code, you have to manually insert a # symbol at the beginning of every line (in Raku). This can be incredibly tedious if your text editor does not offer a shortcut or similar for it. The practice is very common in large code bases. For that reason, Michael refers to C++ and Java as

popular languages that were designed from the outset as being useful for large projects, implementing both single line and multiline comments

In those languages you can type comments as follows:

// single line of code

/*
 Several lines of code
*/

But, in addition, in Java you have a special multiline comment syntax[1] for writing documentation:

/**
* Here you can write the doc!
*
*/

A lot of people proposed POD as a solution to this problem, but Michael lists some inconveniences:

From my point of view, this is not as big a problem since POD6 syntax is quite simple and it’s well documented. In addition, it is quite intuitive for newcomers: if you want a header, you use =head1; if you want italics, you use I<>; and so on.

After some discussion, Perl chose POD for implementing multiline comments. Nonetheless, Michael’s proposal was taken into account, and Raku supports multiline comments similar to those of C++ and Java, but with a slightly different syntax:

#`[
Raku is a large-project-friendly
language too!
]
say ":D";

And as a curiosity, Raku has embedded comments, that is:

if #`( embedded comment ) True {
    say "Raku is awesome";
}
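As a side note (my addition, not part of the RFC discussion): the delimiter after #` can be any opening bracket, and doubling the bracket lets the body contain a lone closing one:

#`{{
Curly delimiters work too, and a single } in here is fine
because only the doubled closing bracket ends the comment.
}}
say "still compiles";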

In the end, as a modern, 100-year language, Raku gives you more than one way to do it, so choose whatever fits you best!


  1. It’s not really a multiline comment because you also need to type the * symbol at the beginning of every line.

Raku Advent Calendar: RFC 225: Superpositions (aka Junctions)

Published by Matthew Stephen Stuckwisch on 2020-08-08T00:01:00

Superpositioned Cat and Person

Damian Conway is one of those names in the Perl and Raku world that almost doesn’t need explaining. He is one of the most prolific contributors to CPAN and was foundational in the design of Raku (then Perl 6). One of his more interesting proposals came in RFC225 on Superpositions, which suggested making the features of his Perl Quantum::Superposition module available in the core of the language.

What is a Superposition?¹

In the quantum world, there are measurable things that can exist in multiple states — simultaneously — until the point in time in which we measure them. For computer scientists, perhaps the most salient application of this is in qubits which, as a core element of quantum computing, threaten to destroy encryption as we know it, if quantum supremacy is borne out.

At the end of the day, though, for us it means being able to treat multiple values as if they were a single value so long as we never actually need there to be only one, at which point we get a single state from them.

The Perl Implementation

In the original implementation, Dr. Conway added two new operators, all and any. These converted a list of values into a single scalar value. How was this different from using a list or array? Consider the following Perl/Raku code:

my @foo = (0, 1, 2, 3, 4, 5);

We can easily access each of the values by using array notation:

print @foo[0]; # 0
print @foo[1]; # 1
print @foo[2]; # 2

But what if we wanted to do stuff to this list of numbers? That’s a bit trickier. Functional programmers would probably say “But you have map!”. That’s true, of course. If I wanted to double everything, I could say

@foo = map { $_ * 2}  @foo; # Perl
@foo = map { $_ * 2}, @foo; # Raku

But it could also be nice if I could just say

@foo *= 2;

This is where the superposition can be helpful. Now imagine we have another array and want to add it to our now-doubled set of values in @foo:

my @bar = (0,20,40,60,80,100);
@foobar = @foo + @bar;          # (12); wait what?  Recall that arrays in numeric context are the number of elements, or 6 here.

Your instinctive reaction might be to say that we’d want to end up with (0,22,44,66,88,110) which is simple enough to handle in a basic map or for loop (using the zip operator, Raku can do this simply as @foo Z+ @bar). But remember what a superposition means: anything done happens to all the values, so each value in @foo needs to be handled with each value in @bar, which requires at least two loops if done via map or for (the cross operator in Raku can do this simply as @foo X+ @bar). We actually want (0, 2, 4, 6, 8, 10, 20, 22, 24, 26, 28, 30, 40, 42, 44, 46, 48, 50 … ). More difficult, then, would be to somehow compare this value:

@foobar > 10;

There is no map method we can attach to @foobar to check its values against 10; we’d need to instead map the > 10 over @foobar. But by using superpositioning, we can painlessly do all of the above without a single use of map, for, or anything else that generates line noise:

use Quantum::Superposition;
my $foo = any (0, 1, 2, 3, 4, 5);    # superposition of 0..5
$foo *= 2;                           # superposition of 0,2,4,6,8,10
my $bar = any (0,20,40,60,80,100);
my $foobar = $foo + $bar;            # superposition of 0,2,4,6,8,20,22,24,26…
$foobar > 10;                        # True
$foobar > 200;                       # False
$foobar < 50;                        # True
$foobar < 0;                         # False

In fact, comparison operators are where the power of superpositions really shines. Instead of checking if a string response is in an array of acceptable responses, or using a hash

The Raku proposal

In the original proposal, there were two types of superpositions possible: all and any. These were proposed to work exactly as described above (creating a single scalar value out of a list of values), with their most useful feature being evident when interpreted in a boolean context. For example, in the code

my $numbers = all 1,3,5,7,9;
say "Tiny"  if $numbers < 10;     # Tiny
say "Prime" if $numbers.is-prime; # (no output)

For those wishing to obtain the values, he proposed using the sub eigenstates, which would retrieve the states without forcing the superposition to collapse to a single one. The rest of the RFC argues why superpositions should not be left in module space, as even Dr. Conway’s work had limitations that he himself readily admitted — namely, interacting with everything that assumes a single value for a scalar, and (auto)threading. The former should be fairly obvious why it would be difficult for the Quantum::Superposition module to work perfectly outside of core, because “the use of superpositions changes the nature of subroutine and operator invocations that have superpositions as arguments”.² As well, if we had a superposition of a million values, doing each operation one by one on computers with multiple processors seems silly: it should be possible to take advantage of the multiple processors. While this seems like an obvious proposition today, we must recall that multicore processors were simply not common in the consumer market when the proposal was made. (Intel’s Pentium D chips didn’t arrive until 2005, IBM’s PowerPC970 MP in 2002.) By placing it in core, things can just work as intended and, in the rare event that a module author cares about receiving superimposed values, they can provide special support.

The Raku implementation

For the most part, RFC 225 was well received and expanded in scope. The most obvious change is the name. In the final implementation, Raku calls these superimposed values junctions. But on a practical level, two additional keywords were added, none and one, which provide more options to those using junctions.³ A wildly different — and useful — option was added to provide syntax to create the junctions. Instead of using any 1,2,3, one can also write 1 | 2 | 3, and in lieu of all 1,2,3 it’s possible to write 1 & 2 & 3. Different situations might give rise to using one or the other form, which aids the Perl & Raku philosophy of TIMTOWTDI.
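A small sketch of the two newcomers next to the infix form (my example):

my $n = 7;
say "no match"      if $n == none(2, 4, 6, 8);  # no match
say "exactly once"  if $n == one(1, 7, 14);     # exactly once
say "infix flavour" if $n == 1 | 7 | 49;        # infix flavour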

One feature that did not make the cut was the ability to introspect the current states. As late as 2009, it seems it was still planned (based on this issue), but at some point, it was taken out, probably because the way that junctions work means that any methods called on them ought to be fully passed through to their superimposed values, so it would be weird to have a single method that didn’t. Nonetheless, by abusing some of the multithreading that Raku does with junctions, it’s still possible if one really wants to do it:

sub eigenstates(Mu $j) {
    my @states;
    # the Any-typed parameter makes the call autothread over the junction,
    # so the block runs once per eigenstate
    -> Any $s { @states.push: $s }.($j);
    @states;
}
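For example (the sort is there because autothreading order isn’t guaranteed):

say eigenstates(1 | 2 | 3).sort;  # (1 2 3)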

Conclusion

Junctions, despite their internal complexity and rarity in programming languages, are so well thought out and integrated into common Raku coding styles that most people use them without any thought. Who hasn’t written a signature with a parameter like $foo where Int|Rat or @bar where .all < 256? Who prefers

if $command eq 'quit' || $command eq 'exit'

to these versions? (because TIMTOWTDI)

if $command eq 'quit'|'exit'
if $command eq any <quit exit>
if $command eq @bye.any

None of these is implemented as syntactic sugar for conditionals, though it may seem otherwise. Instead, at their core is a junction. Dr. Conway’s RFC 225 is a prime example of a modest proposal that is simultaneously so crazy and so natural that, while it fundamentally changed how we wrote code, we haven’t even realized it.


  1. I am not a physicist, much less a quantum one. I probably made mistakes here. /me is not sorry.
  2. Maybe there’s a super convoluted way to still pull it off, but to my knowledge, he’s the only person who wrote an entire regex to parse Perl itself in order to add a few new keywords, so if he deems it not possible… I’m gonna go with it’s not possible.
  3. Perhaps in the future others could be designed, such as at-least-half. The sky’s the limit after all in Raku.

Cover image by Sharon Hahn Darlin, licensed under CC-BY 2.0

Raku Advent Calendar: RFC 168, by Johan Vromans: Built-in functions should be functions

Published by liztormato on 2020-08-07T01:01:00

Proposed on 27 August 2000 and frozen on 20 September 2000, this RFC was a generalization of RFC 26: Named operators versus functions (proposed on 4 August 2000, frozen on 28 August 2000), also by Johan Vromans.

Johan’s proposal was to completely obliterate the difference between built-in functions, such as abs, and functions defined by the user. In Perl, abs can be called both as a prefix operator (without parentheses) and as a function taking a single argument.

You see, Perl has this concept of built-in functions that are slightly different from “normal” subroutines for performance reasons. In Perl, as in Raku, the actual name of a subroutine is prefixed with an ‘&‘. In Perl, you can take a reference to a subroutine with ‘\‘, but that doesn’t work for built-in functions.

Nowadays, in Raku, the difference between a subroutine taking a single positional argument, and a built-in prefix operator whose name is acceptable as an identifier, is already minimal. Well, actually absent. Suppose we want to define a prefix operator foo that has the same semantics as abs:

sub foo(Numeric:D $value) {
    $value < 0 ?? -$value !! $value
}

say abs -42;  # 42
say foo -42;  # 42

say abs(-42); # 42
say foo(-42); # 42

You can’t really see a difference, now can you? Well, the reason is simple: in Raku, there is no real difference between the foo subroutine, and the abs prefix operator. They’re both just subroutines: just look at the definition of the abs function for Real numbers.

But how does that work for infix operators? Surely those aren’t subroutines as well in Raku? How can they be? Something like “+” is not a valid identifier, so you cannot define a subroutine with it?

The genius in the process from the RFC to the implementation in Raku has really been the idea to give a subroutine that represents an infix operator a specially formatted name. In the case of the infix + operator, the subroutine is known by the name infix:<+>. And if you look at its definition, you’ll see that it is actually quite simple: the left hand side of the infix operator becomes the first positional argument, and the right hand side the second positional argument. So something like:

say 42 + 666;

is really just syntactic sugar for:

say infix:<+>(42, 666);

Does this apply to all built-in operators in Raku? Well, almost. Some operators, such as ||, or, && and and, are short-circuiting. This means that the value on the right hand side might not be evaluated if the left hand side has a certain value.

A simple example using the say function (which always returns True):

say "foo" or say "bar"; # foo

Because the infix or operator sees that its left hand side is already True, it will not bother to evaluate the right hand side, and thus will not print “bar”. There is currently no way in Raku to mimic this short-circuiting behaviour in “ordinary” subroutines. But this will change when macros finally become first-class citizens in Raku land, which is expected to happen in the coming year as part of Jonathan Worthington‘s work on the RakuAST grant.
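A tiny demonstration (my example) of why an ordinary subroutine cannot short-circuit: its arguments are evaluated before its body ever runs.

sub my-or($a, $b) { $a || $b }   # an ordinary subroutine

my-or (say "foo"), (say "bar");  # foo, then bar: both arguments were evaluated
say "foo" or say "bar";          # foo only: the right side never runs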

Going back to the original RFC, it also mentions:

In particular, it is desired that every built-in
- can be overridden by a user defined subroutine;
- can have a reference taken;
- has a useful prototype.

So, let’s check those points:

can be overridden by a user-defined subroutine

OK, so infix operators have a special name. So what happens if I declare a subroutine with that name? Well, let’s try:

sub infix:<+>(\a, \b) { a + b }
say 42 + 666;

Hmmm… that doesn’t show anything, that just hangs! Well, yeah, because we basically have a case of a subroutine here calling itself without ever returning!

This code example eats about 1GB of memory per second, so don’t do that too long unless you have a lot of memory available!

The easiest fix would be to not use the infix ‘+‘ operator in our version:

sub infix:<+>(\a, \b) { sum a, b }
say 42 + 666;  # 708

But what if we want to refer to the original infix:<+> logic? It’s just a subroutine after all! But where does that subroutine live? Well, in the core of course! And for looking up things in the core, you use the CORE:: PseudoStash:

sub infix:<+>(\a, \b) {
    say "plussing";
    CORE::<&infix:<+>>(a, b)
}
say 42 + 666; # plussing\n708

You look in the CORE:: pseudostash for the full name of the infix operator: CORE::<&infix:<+>> will then give you the subroutine object of the core’s infix + operator, and you can call that as a subroutine with two parameters.

So that part of the RFC has been implemented!

can have a reference taken

For the infix + operator, that would be &infix:<+>, as is basically shown in the example above. You could actually store that in a variable, and use it later in an expression:

my $foo = &infix:<+>;
say $foo(42,666);  # 708
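And since it is an ordinary Callable, you can hand it to higher-order functions like any other subroutine:

say (1..10).reduce(&infix:<+>);  # 55
say [+] 1..10;                   # 55, which is what the reduce meta-op does for us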

Note that contrary to Perl, you do not need to take a reference in Raku. Since everything in Raku is an object, &foo and &infix:<+> are just objects as well. You can use them as they are. So literally this part of the RFC could never be implemented, because Raku does not have references. But for the use case, which is obtaining something that can be called, the RFC has also been implemented.

has a useful prototype

Perl’s prototypes basically morphed into Raku’s Signatures. But that’s at least one blog post all by itself. So for now, we just say that the “prototypes” of Perl in 2000 turned into signatures in Raku. And since you can ask for a subroutine’s signature:

sub foo(\a, \b) { }
say &foo.signature;  # (\a, \b)

You can also do that for the infix + operator:

say &infix:<+>.signature;  # ($?, $?, *%)

Hmmm… that looks different? Well, yes, it does a bit, but what is mainly different is that both positional parameters are optional, and that any named parameters will also be accepted. As to why that is, that’s really the topic of yet another blog post about meta-operators. Which we’ll also leave for another time.
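As a small teaser (my example): those optional positionals are what allow the operator to be called with no arguments at all, which the reduce meta-op relies on for empty lists.

say &infix:<+>();    # 0 (the additive identity)
say &infix:<+>(42);  # 42
say [+] ();          # 0, thanks to that zero-argument call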

Conclusion

RFCs 168 and 26 have been implemented completely, although maybe not in the way the original RFCs envisioned. In a way that nowadays just feels very natural. Which allows us to build further, on the shoulders of giants!

Rakudo Weekly News: 2020.31 TwentyTwenty

Published by liztormato on 2020-08-03T13:17:05

JJ Merelo kicked off the special 20-day Advent Blog cycle in honour of the publication of the first RFC that would lay the foundation for the Raku Programming Language as we now know it. After that, 3 blog posts have already been published:

Looking forward to another 17 to come! If you’re interested in writing one, there are still some open slots available!

Ecosystem Grant Proposal

Tony O’Dell has submitted a grant proposal to redesign the Raku / zef ecosystem and to make it easier to submit your distribution to the ecosystem. Please leave any thoughts you have on this very interesting proposal, so that the Grants Committee can make a better-founded decision (/r/rakulang comments).

Raku Steering Council Election

Thanks to feedback from the community, the nomination of candidates and the actual election have been moved back a month, to 6 September 2020 (nomination) and 20 September 2020 (election). This should give people more time to consider nominating themselves and allow word of the election to spread further. Vadim Belman blogged about it. See the (adapted) announcement.

Making Github Actions easier

Shoichi Kaji blogged about the new setup-raku distribution for GitHub Actions. A cool way of setting up your own Continuous Integration testing of your Raku modules on GitHub!

Another Five!

Wenzel P. P. Peppmeyer has written five blog posts again this week:

Docker Demystified

JJ Merelo found out that a new book called “Docker Demystified” by Gavin L. Rebeiro actually uses Raku containers and their applications as examples. Pretty cool to see Raku mainstreaming!

A fork of values

Leon Timmermans wrote an op-ed about the looming schism in the Perl community, with quite a few comments on Hacker News and /r/perl, with quite a few of them touching on Raku.

Weekly Challenge

The entries for Challenge #71 that have Raku solutions:

Andrew Shitov reviewed Weekly Challenge #67, Raku answers for #70 challenge #1 and a video one for challenge #2 of week 70, and did another issue of “Pearls of Raku”: unit sub MAIN and command line, round and precision (/r/rakulang comments)

Weekly Challenge #72 is up for your perusal!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

Again quite a few blog posts and some cool new modules! Yours truly keeps repeating: don’t forget to stay healthy and to stay safe. See you next week!

gfldex: Dropin replacement

Published by gfldex on 2020-08-01T12:10:25

Today I learned that whereis can take multiple commands to look for in $PATH.

$ whereis not-there raku-test-all zef
not-there:
raku-test-all: /home/dex/bin/raku-test-all
zef: /usr/local/src/rakudo/install/share/perl6/site/bin/zef

The results are always in the requested order. So we can use a shell spell to find a candidate to be used as our tester in a Makefile.

# whereis prints "name:" plus any matches; cut -s drops lines without a second
# field (commands not found), and head picks the first hit in the requested order
TESTER := $(shell whereis raku-test-all zef | cut -d ' ' -f 2 -s | head -n 1)

install-deps:
        zef --depsonly install .

test: install-deps
        $(TESTER) --verbose test .

install:
        zef install .

all: test

push: test
        git push

I had to make a few changes to raku-test-all so it mimics the interface of zef. The idea behind this Makefile is to be able to hit F6 and have all tests run and then push to github. Our current ecosystem simply distributes links to github repos. As a result, pushing without testing can lead to somebody else cloning a broken repo (if the timing is bad enough). As you might imagine, any speed-up to testing is very welcome. Travis is quite slow. I think I should work in that area next.

gfldex: Wrapping Exceptions

Published by gfldex on 2020-07-31T22:02:07

As stated before, I would like to reduce simple error handling in Shell::Piping. The following is just way too long to put as a single line into a terminal.

CATCH {
    when X::Shell::CommandNotFound {
        when .cmd ~~ ‚commonmarker‘ {
            put ‚Please install commonmarker with `gem install commonmarker`.‘;
            exit 2;
        }
        default {
            .rethrow;
        }
    }
}

We have a conditional in line 3 and a print statement in line 4. The rest is boilerplate. The shortest syntax I came up with looks like the following.

X::Shell::CommandNotFound.refine({.cmd eq ‚commonmarker‘}, {„Please install ⟨{.cmd}⟩ in ⟨{.path}⟩.“});
X::Shell::CommandNotFound.refine({.cmd eq ‚raku‘}, {„Please run `apt install rakudo`.“});

I want to modify the type object behind CommandNotFound because the module will create instances of them and throw them before I can intercept them. Of course I could catch them. But the whole exercise is done to get rid of the CATCH block. Adding a method to any exception both inside a module, and thus at compile time, and at runtime would be nice. We can do so with a trait.

multi sub trait_mod:<is>(Exception:U $ex, :$refineable) {
    $ex.HOW.add_method($ex, ‚refine‘, my method (&cond, &message) {
        $ex.WHO::<@refinements>.append: &cond, &message;
        state $wrapped;
        $ex.HOW.find_method($ex, ‚message‘).wrap(my method {
            my @refinements := self.WHO::<@refinements>;
            for @refinements -> &cond, &message {
                if cond(self) {
                    return message(self);
                }
            }
            nextsame
        }) unless $wrapped;
        $wrapped = True;
    });
}

A trait is a fancy sub that takes type objects or other stat-ish (Raku is a dynamic language without static things) objects and does stuff with them. That can be wrapping – a MOP operation in disguise – or actual MOP calls. This redefining trait adds a class attribute named @refinements via autovivification on the package part of the type. This is hidden behind the pseudo method .WHO. Then it adds a method that takes a pair of Callables and stores them in @refinements. The type object’s original .message method is wrapped unless that has already been done, which is controlled by a state variable.

The wrapped .message will call the first callable and expects a Bool in return. If that one returns True, it will call the 2nd callable. Both are called with an exception instance.

The trait does not .^compose the type object it is called on. This is (for now) needed because of the following case.

class X::Shell::CommandNotFound is Exception is refineable { ... }

The trait is actually called on Exception. That works because the added method .refine is not shadowed by anything. We can therefore use this method to wait for it to be called, and finish what we need to do after compile time. The wrapper falls through via nextsame if no condition has matched.

I don’t think I got all corner cases covered yet. That will need further testing. The user facing syntax can stay the way it is because traits are deboilerplaters. The trait itself is likely going to get more complex. But that is fine. The burden is supposed to be on the side of the implementer.
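Putting it together, a hedged sketch of the intended behaviour (assuming the trait above works as designed; the class name is hypothetical):

class X::Hypothetical is Exception is refineable {
    has $.cmd;
    method message { "command ⟨$.cmd⟩ failed" }
}

X::Hypothetical.refine({ .cmd eq 'raku' }, { "Please run `apt install rakudo`." });

try X::Hypothetical.new(:cmd<raku>).throw;
say $!.message;  # Please run `apt install rakudo`.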

gfldex: Concurrent dogfood

Published by gfldex on 2020-07-31T19:03:29

I’m using zef --verbose test . in a Makefile, triggered by the F2 key, to run tests in Raku projects. While zef is very nice, it’s also not very fast yet. It could be a good bit faster if it could run tests in parallel. Since I split up tests into individual files, running them concurrently isn’t all that hard. The biggest hurdle is to collect all the outputs and show them without mixing up lines. With Shell::Piping that is very easy indeed.

constant NL = $?NL;
my &RED = { "\e[31m$_\e[0m" };
my &BOLD = { "\e[1m$_\e[0m" };

&RED = &BOLD = { $_ } unless $*OUT.t;

sub run-test(IO::Path $file where { .e & .f }) {
    my @out;
    my @err;
    my $failed;
    px«raku -Ilib $file» |» @out :stderr(@err) :done({$failed = .exitcodes.so});

    („Testing: {$file}“.&BOLD, @out, @err».&RED).flat.join(NL);
}

I shell out to raku and let it run a single test file. The streams of STDOUT and STDERR end up in Arrays. These are then merged in the right order, with some colouring for good measure. Now I have to get a list of files and run run-test in parallel.

# :batch(1) hands one file to each work unit, :degree(12) allows 12 in parallel
.put for dir(‚t/‘).grep(*.extension eq ‚t‘).sort.hyper(:batch(1), :degree(12)).map(*.&run-test);

The outputs are .put out in the right order thanks to the magic of .hyper. With a single raku process the tests need 11.3s. With 12 threads it’s down to 3s. I shall change the binding to F2 in vim at once!

The whole script can be found here and Shell::Piping here. The latter will land in the ecosystem shortly.

gfldex: Dogfood time!

Published by gfldex on 2020-07-31T15:44:34

Shell::Piping now has all the features I had on my list, so it is time to use it. I still need to work on documentation in the form of README.md. It would be nice to render it to HTML without pushing to github. As it turns out, there is a ruby gem that takes a fancy Markdown file and outputs an HTML fragment. I already had a script that embeds such a fragment into an HTML page, with some effort to make it printable.

#! /usr/bin/env raku

use v6;

put q:to/EOH/;
<html>
  <head>
    <style>
      body {
        margin: auto; margin-top: 4em; max-width: 80em;
      }

      @media print {
        @page { margin: 4em; }
        p { break-inside: avoid; break-before: avoid; }
        h3 { break-after: avoid; break-before: avoid; }
        h2 { break-after: avoid-page; break-before: auto; }
      }
    </style>
  </head>
  <body>
EOH
put $*ARGFILES.slurp;
put ‚  </body>‘;
put ‚</html>‘;

Since this script takes input from STDIN by virtue of $*ARGFILES, it lends itself to being part of a unix pipe. Starting such a pipe by hand is way too much work. Writing to README.md creates all the data needed to decide whether and how to create a README.html.

#! /usr/bin/env raku

use v6;
use Shell::Piping;

react whenever Supply.merge(Promise.in(0).Supply, ‚.‘.IO.watch) {
    say "something changed";
    for dir(‚.‘).grep(*.ends-with('.md')) {
        my $src = .IO;
        my $dst = $src.extension(‚html‘);
        if $src.modified > (try $dst.modified // 0) {
            my @html;
#`[MARK]    px«commonmarker $src» |» px<html-container> |» $dst;
            say ‚spurted!‘;
        }
    }
}

The MARKed line saves me about a dozen lines of code, not counting error handling. Most errors will produce error messages via exceptions thrown by Shell::Pipe. Since we can’t easily have dependencies across language borders, it would be nice to remind my future self what is needed.

CATCH {
    when X::Shell::CommandNotFound {
        when .cmd ~~ ‚commonmarker‘ {
            put ‚Please install commonmarker with `gem install commonmarker`.‘;
            exit 2;
        }
        default {
            .rethrow;
        }
    }
}

I’m specialising an exception, so to speak. There is the X::Shell::CommandNotFound type object and a conditional. If that conditional is not met, we use the already provided exception. Otherwise we replace the message with a better one. I believe that is a pattern worth thinking about. There may be more boilerplate to remove.

vrurg: The Raku Steering Council Election

Published by Vadim Belman on 2020-07-30T00:00:00

I’m not the blogging kind of person and usually don’t post without a good reason. For a long while even a good reason wasn’t enough for me to write something. But things are changing, and this subject I should have mentioned earlier.

We’re currently in the process of forming The Raku Steering Council, which is considered a potentially effective governance model for the language and the community. It’s aimed at taking load off the shoulders of Jonathan Worthington, who currently bears the biggest responsibility for the vast majority of problems the community and the language development encounter.

The biggest advantages of the Council as I see them are:

Disclaimer: everything stated above is my personal view of the situation, which is to be summed up as: the damn good thing is happening!

To the point! The Council is not an elite closed club. Anybody can nominate himself! Just read the election announcement.

BTW, the announcement currently states that Aug 2 is the last date to nominate. This is about to change to Sep 6. Still, don’t procrastinate too much; let the community know about your nomination and yourself!

Rakudo Weekly News: 2020.30 Almost On Time

Published by liztormato on 2020-07-27T20:47:11

Alexander Kiryuhin announced the Rakudo 2020.07 Compiler Release just a few days after the targeted date! The delay was caused by some build breakage introduced just days before the release, which needed to be fixed first. The associated binary packages are available at the expected locations.

But that was not the only release they announced this week: there’s also a new (free) Community release of the Comma IDE, the IDE of choice for Raku! Which brings multi-module project support, among many other goodies.

It was Twenty Years Ago Today

On August 1st, it was 20 years ago that the first Perl 6 RFC was published. To celebrate that, JJ Merelo is organizing a special blog series of 20 articles, each covering an RFC and how that RFC ended up being implemented in current day Raku. This is your chance to find out about the history of Raku, and maybe get a better understanding as to why, in Raku, things are the way they are! Either as a blog writer or as a reader!

RakuAST Grant Report

Shortly before going on vacation, Jonathan Worthington has written up a report on their work on the RakuAST grant in July (/r/rakulang comments).

Call for Grant Proposals

Only a few days left to submit your Raku Grant proposals for the July round!

Cleaner code!

Wim Vanderbauwhede has written an extensive article about how programming in a functional style can make your code cleaner: Cleaner code with functional programming. With examples in both Raku and Python, highlighting the similarities and the differences between the languages. A must-read if you don’t know what functional programming is, or would like a refresher.

Introducing Podlite

Alexandr Zahatski has written a nice introductory blog post about Podlite, an open-source pod6 markup language editor (/r/rakulang comments). Cool stuff!

Another three!

Wenzel P. P. Peppmeyer has written three blog posts again this week:

Compiling on OpenBSD

Dante Catalfamo has written a blog post about their experiences compiling the most recent Rakudo Star on OpenBSD. Although evidently this was not painless, it does have a Hollywood ending (/r/rakulang comments).

For Love of the Underdog

John Longwalker blogs about being able to legitimately say “I did that before it was cool”. How their research into APL and Eiffel, and the use of Raku, has given them the self-realization that blogging about this journey would be a good thing (/r/rakulang comments). Looking forward to future posts!

Raku Pearls

Andrew Shitov has written three blog posts about cool coding practices that they found while reviewing Raku solutions to past Weekly Challenges:

Looking forward to more issues!

Weekly Challenge

The entries for Challenge #70 that have Raku solutions:

Andrew Shitov has started reviewing Raku answers to past weeks challenges: check out the videos for week #66 and week #69. And Weekly Challenge #71 is up for your perusal!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

A week with a flurry of releases and quite a few blog posts and some cool new modules, a good trend!

Do you want to have a say in the direction the Raku Programming Language is moving? Then make yourself available as a potential Raku Steering Council member. Want to know more? Check out the proposed Raku Governance Model!

Finally, again, yours truly keeps repeating: don’t forget to stay healthy and to stay safe. See you next week!

Rakudo Weekly News: 2020.29 Election Time

Published by liztormato on 2020-07-20T19:58:46

A group of Raku community members have come together to support the election of a Raku Steering Council using a Raku Governance Model (which is modelled after the Python Governance Model). If you have a commit bit in any of the Rakudo, NQP or MoarVM repositories, you will have active and passive voting rights for this election even if you haven’t been active for years. Please read the announcement for more information (/r/rakulang comments).

An issue of 17 months

JJ Merelo finally closed the checklist for documenting 6.d features after 17 months of hard work by all documenters, and specifically by JJ Merelo himself! Kudos! Just reading the list of documented new features of 6.d makes one realize how much has happened since the first release in December 2015!

Advancing Raku

Vadim Belman continued with two more instalments of their Advanced Raku For Beginners blog series:

More than just Challenges

Andrew Shitov wrote two blog posts this week, inspired by the Weekly Challenge:

Holding configuration data

Patrick Spek blogged about Config 3.0, a generic class to hold… configuration data, which adds support for environment variables for specifying a configuration. Which should make life easier for developers.

More RakuOps!

Alexey Melezhik has published issue number 2 of RakuOps, showing how you can use the Raku Programming Language in daily DevOps tasks.

No more boilerplate!

Wenzel P. P. Peppmeyer shows ways to deboilerplate your code.

NEXSS Programmer

It appears that the Raku Programming Language is now one of the languages supported on NEXSS Programmer. Although it’s not clear to yours truly yet how that has come about exactly.

Weekly Challenge

The entries for Challenge #69 that have Raku solutions:

Challenge #70 is up for your perusal!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

A bit of a quiet week in Raku Programming Language world, in preparation of another Compiler Release. Some sad, but not unexpected news: no workshop in Switzerland this year. Fortunately, a nice crop of repeat bloggers. Finally, in these interesting times, don’t forget to stay healthy and to stay safe. See you next week!

vrurg: Metamodel Introduction Article. Operators

Published by Vadim Belman on 2020-07-18T06:51:00

I’m publishing the next article from the ARFB series. This time a rather short one, much like a warm-up prior to the main workout.

But I’d like to devote this post to another subject. It’s too small for an article, yet still worth special note. It was again inspired by one more post from Wenzel P.P. Peppmeyer. Actually, I knew there was going to be a new post from him when I found an error report in the Rakudo repository. And the subject of that report is what made me write this post.

In the report Wenzel claims that the following code results in incorrect Rakudo behaviour:

class C { };
my \px = C.new;
sub postcircumfix:«< >»(C $c is raw) {
    dd $c;
}
px<ls -l>;

And that either the operator redefinition must work or the error message he gets is less than awesome:

===SORRY!=== Error while compiling /home/dex/projects/raku/lib/raku-shell-piping/px.raku
Missing required term after infix
at /home/dex/projects/raku/lib/raku-shell-piping/px.raku:9
------> px<ls -l>⏏;
    expecting any of:
        prefix
        term

Before I tell why things are happening as intended here, let me note two problems with the code itself, which I copied over as-is since it doesn’t work anyway. First, the postcircumfix sub must be a multi, and in Wenzel’s post it is done correctly. Second, it must receive two arguments: the first is the object it is applied to, the second is what is enclosed in the angle brackets.

So, why won’t it work as one might expect? In Raku there is a class of syntax constructs which look like operators but are in fact syntax sugar. There may be different reasons why this is done. For example, the assignment operator = is done this way to achieve better performance. < > makes what is enclosed inside of it a string or a list of strings. Because of this it belongs to the same category as quotes "", for example. Therefore, it can only be implemented properly as a syntax construct. When we try to redefine it, we break the compiler’s parsing, and instead of a postcircumfix it finds a pair of less-than and greater-than operators. Because the latter doesn’t have a right-hand side, we get the error we see.
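To contrast (my own sketch, not from the error report, and assuming the usual way subscripts are overloaded): a subscripting postcircumfix like { } is a regular multi, so it can be given new candidates, while < > stays quoting sugar handled by the parser.

class C { }
# add a candidate for our own type; the core candidates remain in effect
multi sub postcircumfix:<{ }>(C $c, $key) { "subscripted with $key" }

my \px = C.new;
say px{'ls -l'};  # subscripted with ls -l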

And you know, it was really useful to write this post, as I realized that the closing of the ticket was premature and that such compiler behavior is still incorrect, because the attempt to redefine the op should probably not result in bad parsing.

Rakudo Weekly News: 2020.28 Bridges 7

Published by liztormato on 2020-07-13T14:05:37

Arne Sommer, inspired by a solution of a previous Weekly Challenge, wrote a small series of blog posts about the seven bridges of Königsberg:

Recommended if you’re into mathematical problems and/or graph theory!

Impressed with Raku

A post by faiface on /r/ProgrammingLanguages made it to the top 20 of highest upvotes ever. And a nice post it is! Inspired by a professor describing Raku as “an interesting language with a bunch of cool design choices”. Good stuff!

June Report

Jonathan Worthington reported on their work on the new unified dispatch mechanism in June for the Raku Development Grant. Great progress being made, and a treasure trove of internals information (/r/rakulang comments)!

Tangling Code

Brian Wisti got themselves into a tangle. Good thing there are Raku regexes! Read all about it in Tangling code from Hugo content in Raku!

Announcement Recovered

Andrew Shitov went through their archives and unearthed an audio-only recording of Larry Wall’s announcement of the release of Perl 6 (/r/rakulang, Twitter comments)!

Multiple Hosts

Alexey Melezhik has published a how-to provision dynamic hosts with Sparrowdo and Sparky.

Indicating absence

In response to a question on StackOverflow, Wenzel P. P. Peppmeyer elaborates on ways to indicate absence of values.

Weekly Challenge

The entries for Challenge #68 that have Raku solutions:

Challenge #69 is up for your perusal!

Core Developments

Questions about Raku

Meanwhile on Twitter

Meanwhile on perl6-users

Comments about Raku

New Raku Modules

Updated Raku Modules

Winding down

A bit of a quiet week in the world of the Raku Programming Language. Still some nice blog posts. In these interesting times, don’t forget to keep healthy and to keep safe. See you next week!

vrurg: A New Article

Published by Vadim Belman on 2020-07-13T10:01:00

A new article in the Advanced Raku For Beginners series is published now. With a really surprising subject this time! It is about how we define Raku. Or, in other words: how does one know that one’s code is Raku? Why does a compiler have the right to state that it is actually compiling Raku? And a few side concepts to provide grounds for the main topic.

It may seem to be a bit late. But keeping in mind that the article refers back to a few concepts from previous publications, it’s probably just the right time for it!

Enjoy and don’t forget to correct me whenever necessary!

vrurg: Nil As No Data

Published by Vadim Belman on 2020-07-13T09:34:00

In response to a Wenzel P. P. Peppmeyer post I have a thing to add.

The problem of indicating the “no more data” condition is something I have dealt with a couple of times too. While I do like the last solution from Wenzel’s post, the one with the Failure, my approach was different. Nil appeals to me as more appropriate for the task in some cases. For example, while developing Concurrent::PChannel I considered it this way:

The channel must be capable of transferring any user data. There must be no exception for what passes through, no false end-of-data markers. Therefore, if the poll method returns such a marker, it has to be something distinctively unique to the module.

My solution, in a simplified form, looked like this:

unit module Concurrent::PChannel;
role NoData is export { }
method poll {
    ...;
    $found ?? $packet !! (Nil but NoData)
}

Simple and straightforward! Because the NoData role belongs to the module, it guarantees the uniqueness of the return value: the module itself would never tamper with the data injected into the channel, nor will it push anything into it. Anyone trying to use Concurrent::PChannel::NoData for anything else but testing the poll method’s return value is aware of the consequences. I myself do not even test for falseness of the poll return value and only check if the role is applied:

if $poll-returned-packet ~~ Concurrent::PChannel::NoData {
    ...
}

But from the boolean point of view, Nil but NoData is False and therefore conforms to another condition in Wenzel’s post.
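A condensed sketch of both checks (binding instead of assignment, to sidestep Nil’s container-resetting behaviour):

use Concurrent::PChannel;

my \result = Nil but Concurrent::PChannel::NoData;
say ?result;                                 # False (boolifies like Nil)
say result ~~ Concurrent::PChannel::NoData;  # True (the role marks it)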

vrurg: New Articles Redaction

Published by Vadim Belman on 2020-07-06T00:00:00

In the Everything Is An Object. MOP. article I made a shameful error when I stated that a type object and its meta-object are the same thing. Oh, my… I had to urgently rewrite big chunks of the text.

My deepest apologies for all this! Enjoy the new corrected version!

p6steve: perl7 vs. raku: Sibling Rivalry?

Published by p6steve on 2020-06-27T11:24:01

It was an emotional moment to see the keynote talk at TPRCiC from Sawyer X announcing that perl 7.00 === 5.32. Elation because of the ability of the hardcore perl community to finally break free of the frustrating perl6 roadblock. Pleasure in seeing how the risky decision to rename perl6 to raku has paid off and hopefully is beginning to defuse the tensions between the two rival communities. And Fear that improvements to perl7 will undermine the reasons for many to try out raku and may cannibalise raku usage. (Kudos to Sawyer for recognising that usage is an important design goal.)

Then the left side of my brain kicked in. Raku took 15 years of total commitment of genius linguists to ingest 361 RFCs and then synthesise a new kind of programming language. If perl7 seeks the same level of completeness and perfection as raku, surely that will take the same amount of effort. And I do not see the perl community going for the same level of breaking changes that raku did. (OK maybe they could steal some stuff from raku to speed things up…)

And that brought me to Sadness. To reflect that perl Osborned sometime around 2005. That broke the community in two – let’s say the visionaries and the practical-cats. And it drove a mass emigration to Python. Ancient history.

So now we have two sister languages, and each will find a niche in the programming ecosystem via a process of Darwinism. They both inherit the traits (https://en.wikipedia.org/wiki/Perl#Design) that made perl great in the first place….

The design of Perl can be understood as a response to three broad trends in the computer industry: falling hardware costs, rising labor costs, and improvements in compiler technology. Many earlier computer languages, such as Fortran and C, aimed to make efficient use of expensive computer hardware. In contrast, Perl was designed so that computer programmers could write programs more quickly and easily.

Perl has many features that ease the task of the programmer at the expense of greater CPU and memory requirements. These include automatic memory management; dynamic typing; strings, lists, and hashes; regular expressions; introspection; and an eval() function. Perl follows the theory of “no built-in limits,” an idea similar to the Zero One Infinity rule.

Wall was trained as a linguist, and the design of Perl is very much informed by linguistic principles. Examples include Huffman coding (common constructions should be short), good end-weighting (the important information should come first), and a large collection of language primitives. Perl favours language constructs that are concise and natural for humans to write.

Perl’s syntax reflects the idea that “things that are different should look different.” For example, scalars, arrays, and hashes have different leading sigils. Array indices and hash keys use different kinds of braces. Strings and regular expressions have different standard delimiters. This approach can be contrasted with a language such as Lisp, where the same basic syntax, composed of simple and universal symbolic expressions, is used for all purposes.

Perl does not enforce any particular programming paradigm (procedural, object-oriented, functional, or others) or even require the programmer to choose among them.

But perl7 and raku serve distinct interests & needs:

Thing              | perl7          | raku
-------------------|----------------|----------------------
compilation        | static parser  | one pass compiler
compile speed      | super fast     | relies on pre-comp
execution          | interpreted    | virtual machine
execution speed    | super fast     | relies on jit
module library     | CPAN native    | CPAN import
closures           | yes            | yes
OO philosophy      | Cor not module | pervasive
OO inheritance     | Roles + Is     | Roles + Is + multiple
method invocation  | ->             | .
type checking      | no             | gradual
sigils             | idiosyncratic  | consistent
references         | manual         | automatic
unicode            | feature guard  | core
signatures         | feature guard  | core
lazy execution     | nope           | core
Junctions          | nope           | core
Rat math           | nope           | core
Sets & Mixes       | nope           | core
Complex math       | nope           | core
Grammars           | nope           | core
mutability         | nope           | core
concurrency        | nope           | core
variable scope     | “notched”      | cleaner
operators          | C-like         | cleaner (e.g. for ->)
switch             | no             | gather/when
regexen            | classic        | cleaner
eval               | yes            | shell
AST macros         | huh?           |

…and so on

A long list, and perhaps a little harsh on perl since many things may be got from CPAN – but when you use raku in anger, you do see the benefit of having a large core language. Only when I made this table did I truly realise just what a comprehensive language raku is, and that perl will genuinely be the lean and mean option.

[Image: Ariel Atom 3.5 (Evo review), captioned "perl7"]
[Image: Tesla Model X, captioned "raku"]

And, lest we forget our strengths:

When I first saw Python code, I thought that using indents to define the scope seemed like a good idea. However, there’s a huge downside. Deep nesting is permitted, but lines can get so wide that they wrap lines in the text editor. Long functions and long conditional actions may make it hard to match the start to the end. And I pity anyone who miscounts spaces and accidentally puts in three spaces instead of four somewhere — this can take hours to debug and track down. [Source: here]

p6steve: Raku Objects: Confusing or What?

Published by p6steve on 2020-05-07T21:51:52

Chapter 1: The Convenience Seeker

Coming from Python, the Raku object model is recognizable, but brings a tad more structure:

[Image: Screenshot 2020-05-07 22.36.37]

What works for me, as a convenience seeker, is:

These are the things you want if you are writing in a more procedural or functional style and using class as a means to define a record type.
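Since the screenshot doesn’t survive the feed, here is a rough sketch of that convenience style (my reconstruction, not the post’s exact code):

class Point {
    has $.x is rw = 0;   # public accessor plus writability from one declaration
    has $.y is rw = 0;
}

my $p = Point.new(x => 1);
$p.y = 2;                # setter for free
say $p.x + $p.y;         # 3, getter for free too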

Chapter 2: The Control Freak

Here’s the rub…

When we describe OO, terms like “encapsulation” and “data hiding” often come up. The key idea here is that the state model inside the object – that is, the way it chooses to represent the data it needs in order to implement its behaviours (the methods) – is free to evolve, for example to handle new requirements. The more complex the object, the more liberating this becomes.

However, getters and setters are methods that have an implicit connection with the state. While we might claim we’re achieving data hiding because we’re calling a method, not accessing state directly, my experience is that we quickly end up at a place where outside code is making sequences of setter calls to achieve an operation – which is a form of the feature envy anti-pattern. And if we’re doing that, it’s pretty certain we’ll end up with logic outside of the object that does a mix of getter and setter operations to achieve an operation. Really, these operations should have been exposed as methods with names that describe what is being achieved. This becomes even more important if we’re in a concurrent setting; a well-designed object is often fairly easy to protect at the method boundary.

(source jnthn https://stackoverflow.com/questions/59671027/mixing-private-and-public-attributes-and-accessors-in-raku)
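
A hedged sketch of the approach jnthn describes – private state, behaviour exposed as named operations (my own illustration, not the original screenshot):

class Point {
    has $!x;   # private attribute: no accessor is generated
    has $!y;

    submethod BUILD(:$!x = 0, :$!y = 0) { }   # constructor still takes named args

    method move-by($dx, $dy) {   # an operation, not a sequence of setter calls
        $!x += $dx;
        $!y += $dy;
        self;
    }

    method gist { "($!x, $!y)" }
}

say Point.new(x => 1, y => 2).move-by(3, 4);   # OUTPUT: «(4, 6)␤»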

Let’s fix that:

[Screenshot 2020-05-07 22.38.41: code example]
Now, I had to do a bit more lifting, but here’s what I got:

And, in contrast to Chapter 1:

Chapter 3: Who Got the Colon in the End?

I also discovered Larry’s First Law of Language Redesign: Everyone wants the colon

Apocalypse 1: The Ugly, the Bad, and the Good https://www.perl.com/pub/2001/04/02/wall.html/

I conclude that Larry’s decision was to confer the colon on the method-call syntax, subtly tilting the balance towards the strict model: preferring $p.y: 3 over $p.y = 3.
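
To see the two spellings side by side (my example, assuming an is rw accessor and a setter-style method):

class P {
    has $.y is rw;
    method set-y($v) { $!y = $v }
}

my $p = P.new;
$p.y = 2;      # l-value assignment through the generated accessor
$p.set-y: 3;   # colon form: identical to $p.set-y(3)
say $p.y;      # OUTPUT: «3␤»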

p6steve: Raku vs. Perl – save 70%

Published by p6steve on 2020-04-17T17:36:39

Having hit rock bottom with an ‘I can’t understand my own code sufficiently enough to extend/maintain it’ moment, I have been on a journey to review the perl5 Physics::Unit design and to use this to cut through my self-made mess of raku Physics::Unit version 0.0.2.

Now I bring the perspective of a couple of years of regular raku coding to bear, and I am hoping that this bastard child of mature perl5 and raku version one will surpass both, in the spirit of David Bowie’s “Pretty Things”.

One of the reasons I chose Physics::Unit as a project was that, on the face of it, it seemed to have an aspect that could be approached with raku Grammars – helping me learn them. Here’s a sample of the perl5 version:

[Screenshot 2020-04-17 18.40.05: a sample of the perl5 parse function]

Yes – a recursive descent parser written from scratch in perl5 – pay dirt! There are 215 source code lines dedicated to the parse function. 5 more screens like this…

So I took out my newly sharpened raku tools and here’s my entire port: 

[Screenshot 2020-04-17 18.42.08: the raku Grammar port]

Instead of ranging over 215 lines, raku has refined this down to a total of 58 lines (incl. the 11 blank ones I kept in for readability) – a space saving of over 70%. That is partly the removal of parser boilerplate code, partly the raku Grammar constructs, and partly an increased focus on the program logic as opposed to the mechanism.

For my coding style, this represents a greater-than-two-thirds improvement – by getting the whole parser onto a single screen, I find that I can get the whole problem into my brain’s working memory and avoid burning cycles scrolling up and down to pin down butterflies, er, bugs.

Attentive students will have noted that the Grammar / code integration provides a very natural paradigm for loading real-world data into an OO system: the UnitAction class starts with a stub object and populates ‘has’ attributes as it goes.
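
As a much-reduced sketch of that Grammar-plus-Actions pattern (my own toy, not the actual Physics::Unit source – the UnitGrammar and UnitActions names here are stand-ins):

grammar UnitGrammar {
    token TOP    { <factor>+ % \s+ }
    token factor { <name> <power>? }
    token name   { <:alpha>+ }
    token power  { '^' '-'? \d+ }
}

class UnitActions {
    has %.dims;   # populated as parsing proceeds
    method factor($/) {
        %!dims{~$<name>} += $<power> ?? $<power>.Str.substr(1).Int !! 1;
    }
}

my $actions = UnitActions.new;
UnitGrammar.parse('kg m^2 s^-3', :$actions);
say $actions.dims;   # OUTPUT: «{kg => 1, m => 2, s => -3}␤»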

Oh, and the raku code does a whole lot more, such as support for unicode superscripts (up to +/-4), type assignment and type checking, offsets (such as 0 °C = 273.15 K), wider tolerance of user input, and so on. Most importantly, Real values are kept as Rats as much as possible, which helps greatly: for example, when round-tripping 38.5 °C to °F and back, it still equals exactly 38.5 °C!
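
To see why the Rats matter (my illustration):

say 0.1 + 0.2 == 0.3;      # OUTPUT: «True␤» – decimal literals are Rats
my $f = 38.5 * 9/5 + 32;   # °C → °F: exactly 101.3
say ($f - 32) * 5/9;       # OUTPUT: «38.5␤» – the round trip is exact

With binary floating point, the same round trip typically picks up a small representation error.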

One final remark: use Grammar::Tracer is a fantastic debugging tool for finding and fixing the subtle bugs that can creep in, and it contributed to quickly getting to the optimum solution.

rakudo.org: Rakudo Star Release 2020.01

Published on 2020-02-24T00:00:00

p6steve: Raku: the firkin challenge

Published by p6steve on 2020-01-20T22:40:01

For anyone wondering where my occasional blog on raku has been for a couple of months – sorry. I have been busy wrestling with, and losing to, the first released version of my Physics::Measure module.

Of course, this is all a bit overshadowed by the name change from perl6 to raku. I was skeptical about this, but did not have a strong opinion either way. So kudos to the folks who thrashed this out, and I am looking forward to a naissance. For now, I have decided to keep my nickname ‘p6steve’ – I enjoy the resonance between P6 and P–sics and that is my niche. No offence intended to either camp.

My stated aim (blogs passim) is to create a set of physical units that makes sense for high school education. To me, inspired by the perl5 Physics::Unit module, that means not just core SI units for science class, but also old-style units like sea miles, furlongs/fortnight and so on for geography and even history. As I started to roll out v0.0.3 of raku Physics::Unit, I thought it would be worthwhile to track a real-world high school education resource, namely OpenStax CNX. And as I came upon this passage, I had to take on the firkin challenge:

While there are numerous types of units that we are all familiar with, there are others that are much more obscure. For example, a firkin is a unit of volume that was once used to measure beer. One firkin equals about 34 liters. To learn more about nonstandard units, use a dictionary or encyclopedia to research different “weights and measures.” Take note of any unusual units, such as a barleycorn, that are not listed in the text. Think about how the unit is defined and state its relationship to SI units.

Disaster – I went back to the code for Physics::Unit and, blow me, could I figure out how to drop in a new Unit – the firkin? Nope! Why not? Well, Physics::Unit v0.0.3 was impenetrable even to me, the author. Statistically, it has 638 lines of code alongside 380 lines of heredoc data. Practically, while it passes all the tests 100%, it is not a practical, maintainable code base.

How did we get here? Well I plead guilty to being an average perl5 coder who really loves the expressivity that Larry offers … but a newbie to raku. I wanted to take on Physics::Measure to learn raku. Finally, I have started to get raku – but it has taken me a couple of years to get to this point!

My best step now – bin the code. I have dumped my original effort, gone back to the original perl5 Physics::Unit module source and transposed it to raku. The result: 296 lines of tight code alongside the same 380 lines of heredoc – a reduction of 53%! And a new-found respect for the design skill of my perl5 forebears.

I am aiming to release as v0.0.7 in April 2020.

Jo Christian Oterhals: By the way, you could replace … * with Inf or the unicode infinity symbol ∞ to make it more…

Published by Jo Christian Oterhals on 2019-11-24T19:25:11

By the way, you could replace … * with Inf or the unicode infinity symbol ∞ to make it more readable, i.e.

my @a = 1, 1, * + * … ∞;

— — or — —

my @a = 1, 1, * + * … Inf;

Jo Christian Oterhals: As I understand this, * + * … * means the following:

Published by Jo Christian Oterhals on 2019-11-24T10:20:11

As I understand this, * + * … * means the following:

First, * + * sums the two previous elements in the list, and … * tells it to keep doing this an infinite number of times; i.e.

1, 1, (1 + 1)

1, 1, 2, (1 + 2)

1, 1, 2, 3, (2 + 3)

1, 1, 2, 3, 5, (3 + 5)

1, 1, 2, 3, 5, 8, (5 + 8), etc.

… the three dots mean that the sequence is lazy, i.e. it does not generate an element before you ask for it. This can be good for large lists that are computationally heavy.
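
That laziness is easy to observe (my example):

my @fib = 1, 1, * + * … ∞;   # infinite, but safe: nothing is computed yet
say @fib[9];                 # OUTPUT: «55␤» – only ten elements get realized
say @fib.is-lazy;            # OUTPUT: «True␤»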

p6steve: Atomic Units?

Published by p6steve on 2019-09-17T20:49:39

One of the most exciting parts of blogging about and hacking on perl6* is that there’s a community out there and there’s (always) more than one way to do it!

For Physics::Measure I have been working on a design for a ‘nano-slang’ that can provide a shortcut for the usual new Class declaration… quite long winded for my target audience of physics undergraduates and high school students.

#Instances the usual way

my Unit $u .= new( name => 'm', unitsof => 'Distance' );   # Unit m

my Distance $a .= new( value => 10, units => $u );         # Distance 10 m

So, duly inspired by Rakudo perl6 going atomic ⚛ by applying unicode to some threading constructs, I started with the notion of the ‘libra’ operator ♎ as shorthand to declare and load Measure instances.

#Introducing the libra operator ♎ as shorthand to declare and load

my $b ♎ '50 m';    # Distance 50 m

$b ♎ '3 yards';    # Distance 3 yards

As you can see, the gap created between ♎ and ; is a slang zone that can consume strings and numeric literals. Here’s something a bit more elaborate:

#Normalization with the .norm method

my $p ♎ '27 kg m^2 / s^3';   # Power 27 kg m^2 / s^3

$p .= norm;                  # Power 27 W
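
For readers wondering how such an operator can even exist: Raku lets you declare new infixes in ordinary code. A toy sketch (my assumption of one possible shape – the real Physics::Measure internals differ, and a hash stands in for a proper Measure object):

sub infix:<♎> ($left is rw, Str $right) {
    my ($value, $units) = $right.split(' ', 2);
    $left = %( value => +$value, units => $units );
}

my $b ♎ '50 m';
say "$b<value> $b<units>";   # OUTPUT: «50 m␤»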

A few design ideas drew me in this direction:

# Resistance
['Ω', 'Ohm:s',],    'kg m^2 / A^2 s^3',
['kilohm:s',],      'kilo Ohm',
['megohm:s',],      'mega Ohm',

HOWEVER!!

Others have proposed a much more direct approach to generate and combine Measure objects – by the use of a simple postfix syntax – thank you!

Something like:

say 500g;   # --> Weight.new(grams => 500, prefix => "")

say 2kg;    # --> Weight.new(grams => 2000, prefix => "kg")

Watch this space! Or even better, zef install Physics::Measure and give it a try…

~p6steve

* soon to be Rakudo?!

Jo Christian Oterhals: You’re right.

Published by Jo Christian Oterhals on 2019-08-25T18:51:13

You’re right. I’ll let the article stand as it is and reflect my ignorance as it was when I wrote it :-)

Jo Christian Oterhals: Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

Published by Jo Christian Oterhals on 2019-07-31T13:23:17

Perl 6 small stuff #21: it’s a date! …or: learn from an overly complex solution to a simple task

This week’s Perl Weekly Challenge (#19) has two tasks. The first is to find all months with five weekends in the years from 1900 through 2019. The second is to program an implementation of word wrap using the greedy algorithm.

Both are pretty straight-forward tasks, and the solutions to them can (and should) be as well. This time, however, I’m also going to do the opposite and incrementally turn the easy solution into an unnecessarily complex one. Because in this particular case we can learn more by doing things the unnecessarily hard way. So this post will take a look at Dates and date manipulation in Perl 6, using PWC #19 task 1 as an example:

Write a script to display months from the year 1900 to 2019 where you find 5 weekends i.e. 5 Friday, 5 Saturday and 5 Sunday.

Let’s start by finding five-weekend months the easy way:

#!/usr/bin/env perl6
say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1 }, do 1900..2019 X 1,3,5,7,8,10,12;

The algorithm for figuring this out is simple. Given the prerequisite that there must be five occurrences of not only Saturday and Sunday but also Friday, you A) *must* have 31 days to cram five weekends into. And when you know that you’ll also see that B) the last day of the month MUST be a Sunday and C) the first day of the month MUST be a Friday (you don’t have to check for both; if A is true and B is true, C is automatically true too).

The code above implements C and employs a few tricks. You read it from right to left (unless you write it from left to right, like this: say do 1900..2019 X 1,3,5,7,8,10,12 ==> map { Date.new: |$_, 1 } ==> grep *.day-of-week == 5 ==> join "\n"; )

Using the X operator I create a cross product of all the years in the range 1900–2019 and the months 1, 3, 5, 7, 8, 10, 12 (31-day months). In return I get a sequence containing all year-month pairs of the period.

The map function iterates through the Seq. There it instantiates a Date object. A little song and dance is necessary: as Date.new takes three unnamed integer parameters (year, month and day), I have to do something with what I have – a two-element list of year and month. I therefore use the | operator to “explode” the list into two integer parameters for year and month.

You can always use this technique for calling a subroutine with fixed parameters, using an array of parameter values rather than separate variables for each parameter. The code below exemplifies usage:

my @list = 1, 2, 3;
sub explode-parameters($one, $two, $three) {
    # ...do something...
}
# traditional call
explode-parameters(@list[0], @list[1], @list[2]);
# ...or using |
explode-parameters(|@list);

Back to the business at hand — the .grep filters out the months where the 1st is a Friday, and those are our 5 weekend months. So the output of the one-liner above looks something like this:

...
1997-08-01
1998-05-01
1999-01-01
...

This is a solution as good as any, and if a solution was all we wanted, we could have stopped here. But using this task as an example I want to explore ways to utilise the Date class. Example: the one-liner above does the job, but strictly speaking it doesn’t output the months but the first day of those months. Correcting this is easy, because the Date class supports something called formatters, which use the sprintf syntax. To use one, you pass the named parameter “formatter” when instantiating the object.

say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1, formatter => { sprintf "%04d/%02d", .year, .month } }, do 1900..2019 X 1,3,5,7,8,10,12;

Every time a routine pulls a stringified version of the date, the formatter object is invoked. In our case the output has been changed to…

...
1997/08
1998/05
1999/01
...

Formatters are powerful. Look into them.

Now to the overly complex solution. This is the unthinking programmer’s solution, as we don’t suppose anything. The program isn’t told that 5 weekend months only can occur on 31 day months. It doesn’t know that the 1st of such months must be a Friday. All it knows is that if the last day of the month is not Sunday, it figures out the date of the last Sunday (this is not very relevant when counting three-day weekends, but could be if you want to find Saturday+Sunday weekends, or only Sundays).

#!/usr/bin/env perl6
my $format-it = sub ($self) {
    sprintf "%04d month %02d", .year, .month given $self;
}
sub MAIN(Int :$from-year = 1900,
         Int :$to-year where * > $from-year = 2019,
         Int :$weekend-length where * ~~ 1..3 = 3) {
    my $date-loop = Date.new($from-year, 1, 1, formatter => $format-it);
    while ($date-loop.year <= $to-year) {
        my $date = $date-loop.later(day => $date-loop.days-in-month);
        $date = $date.truncated-to('week').pred if $date.day-of-week != 7;
        my @weekend = do for 0..^$weekend-length -> $w {
            $date.earlier(day => $w).weekday-of-month;
        };
        say $date-loop if ([+] @weekend) / @weekend == 5;
        $date-loop = $date-loop.later(:1month);
    }
}

This code can solve the task not only for three-day weekends, but also for weekends consisting of Saturday + Sunday, as well as only Sundays. You control that with the command-line parameter weekend-length=[1..3].

This code finds the last Sunday of each month and counts whether it has occurred five times that month. It does the same for Saturday (if weekend-length=2) and Friday (if weekend-length=3). Like this:

my @weekend = do for 0..^$weekend-length -> $w {
    $date.earlier(day => $w).weekday-of-month;
};

The code then calculates the average weekday-of-month for these three days like this:

say $date-loop if ([+] @weekend) / @weekend == 5;

This line uses the reduction operator [+] on the @weekend list to find the sum of all elements. That sum is divided by the number of elements. If the result is 5, then you have a five-weekend month.
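
In isolation (my example):

my @weekend = 5, 5, 5;           # each of the three days occurred five times
say [+] @weekend;                # OUTPUT: «15␤»
say ([+] @weekend) / @weekend;   # OUTPUT: «5␤» – @weekend numifies to its element count, 3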

As for fun stuff to do with the Date object:

.later(day|month|year => Int) — adds the given number of time units to the current date. There’s also an earlier method for subtracting.

.days-in-month — tells you how many days there are in the current month of the Date object. The value may be 31, 30, 29 (February, leap year) or 28 (February).

.truncated-to(week|month|day|year) — rolls the date back to the first day of the week, month, day or year.

.weekday-of-month — figures out what day of week the current date is and calculates how many of that day there has been so far in that month.
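
A quick tour of those methods (my example, using a date from the challenge period):

my $d = Date.new(2019, 7, 31);
say $d.days-in-month;           # OUTPUT: «31␤»
say $d.truncated-to('month');   # OUTPUT: «2019-07-01␤»
say $d.weekday-of-month;        # OUTPUT: «5␤» – the fifth Wednesday of July
say $d.later(:1month);          # OUTPUT: «2019-08-31␤»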

Apart from this you’ll see that I added the formatter in a different way this time. This is probably cleaner looking and easier to maintain.

In the end this post maybe isn’t about dates and date manipulation at all, but rather a call for all of us to use the documentation even more. I often think that Perl 6 should have a function for x, y or z – .weekday-of-month is one such example – and the documentation tells me that it actually does!

It’s very easy to pick up Perl 6 and program it as you would have programmed Perl 5 or other languages you know well. But the documentation has lots of info on things you didn’t have before, things that will make programming easier and more fun once you’ve learnt about them.

I guess you don’t need an excuse to delve into the docs, but if you do, the Perl Weekly Challenge is an excellent reason for spending time in them!

Jo Christian Oterhals: You have several options besides do. You could use parentheses instead, like this:

Published by Jo Christian Oterhals on 2019-08-01T07:34:44

You have several options besides do. You could use parentheses instead, like this:

say join "\n", grep *.day-of-week == 5, map { Date.new: |$_, 1 }, (1900..2019 X 1,3,5,7,8,10,12);

In this case I just thought the code looked better without parentheses, so I chose to use do instead. The docs have a couple of sentences about this option here.

BTW, I chose the form Date.new: blah — the colon variant — instead of Date.new(blah) also because I wanted to avoid parentheses. This freedom to choose is the essence of Perl’s credo “There’s more than one way to do it”, I guess.

Rant: This freedom of choice is the best thing about Perl 6, but it has a downside too. Code can be harder to understand if you’re not familiar with the variants another programmer has used. This is part of the Perls’ reputation of being write-only languages.

It won’t happen, but personally I’d like to see someone analyse real-world Perl 6 code (public repositories on GitHub, for instance) and figure out which forms are used most. The analysis could be used to clean house — figuring out which forms to keep and which should be removed. Perl 6 would become a smaller — and arguably easier — language while staying just as powerful as it is today.

rakudo.org: Rakudo Star Release 2019.03

Published on 2019-03-30T00:00:00

brrt to the future: Reverse Linear Scan Allocation is probably a good idea

Published by Bart Wiegmans on 2019-03-21T15:52:00

Hi hackers! First of all, I want to thank everybody who gave such useful feedback on my last post. For instance, I found out that the similarity between the expression JIT IR and the Testarossa Trees IR is quite remarkable, and that they have a fix for the problem that is quite different from what I had in mind.

Today I want to write about register allocation, however. Register allocation is probably not my favorite problem, on account of being both messy and thankless. It is a messy problem because - aside from being NP-hard to solve optimally - hardware instruction sets and software ABIs introduce all sorts of annoying constraints. And it is a thankless problem because the cases in which a good register allocator is useful - for instance, when there are lots of intermediate values used over a long stretch of code - are fairly rare. Much more common are the cases in which either there are trivially sufficient registers, or ABI constraints force a spill to memory anyway (e.g. when calling a function, almost all registers can be overwritten).

So, on account of this being not my favorite problem, and also because I promised to implement optimizations in the register allocator, I've been researching whether there is a way to do better. And what better place to look than one of the fastest dynamic language implementations around, LuaJIT? So that's what I did, and this post is about what I learned from that.

Truth be told, LuaJIT is not at all a learner's codebase (and I don't think its author would claim it is). It uses a rather terse style of C and lots and lots of preprocessor macros. I had somewhat gotten used to the style from hacking dynasm though, so that wasn't so bad. What was more surprising is that some of the steps in code generation that are distinct and separate in the MoarVM JIT - instruction selection, register allocation and emitting bytecode - were all blended together in LuaJIT. Over multiple backend architectures, too. And what's more - all these steps were done in reverse order - from the end of the program (trace) to the beginning. Now that's interesting...

I have no intention of combining all phases of code generation like LuaJIT has. But processing the IR in reverse seems to have some interesting properties. To understand why that is, I'll first have to explain how linear scan allocation currently works in MoarVM, and is most commonly described:

  1. First, the live ranges of program values are computed. Like the name indicates, these represent the range of the program code in which a value is both defined and may be used. Note that for the purpose of register allocation, the notion of a value shifts somewhat. In the expression DAG IR, a value is the result of a single computation. But for the purposes of register allocation, a value includes all its copies, as well as values computed from different conditional branches. This is necessary because when we actually start allocating registers, we need to know when a value is no longer in use (so we can reuse the register) and how long a value will remain in use.
  2. Because a value may be computed from distinct conditional branches, it is necessary to compute the holes in the live ranges. Holes exist because if a value is defined in both sides of a conditional branch, the range will cover both the earlier (in code order) branch and the later branch - but from the start of the later branch to its definition that value doesn't actually exist. We need this information to prevent the register allocator from trying to spill-and-load a nonexistent value, for instance.
  3. Only then can we allocate and assign the actual registers to instructions. Because we might have to spill values to memory, and because values now can have multiple definitions, this is a somewhat subtle problem. Also, we'll have to resolve all architecture specific register requirements in this step.
In the MoarVM register allocator, there's a fourth step and a fifth step. The fourth step exists to ensure that instructions conform to x86 two-operand form (Rather than return the result of an instruction in a third register, x86 reuses one of the input registers as the output register. E.g. all operators are of the form a = op(a, b)  rather than a = op(b, c). This saves on instruction encoding space). The fifth step inserts instructions that are introduced by the third step; this is done so that each instruction has a fixed address in the stream while the stream is being processed.

Altogether this is quite a bit of complexity and work, even for what is arguably the simplest correct global register allocation algorithm. So when I started thinking of the reverse linear scan algorithm employed by LuaJIT, the advantages became clear:
There are downsides as well, of course. Not knowing exactly how long a value will be live while processing it may cause the algorithm to make worse choices in which values to spill. But I don't think that's really a great concern, since figuring out the best possible value is practically impossible anyway, and the most commonly cited heuristic - evict the value that is live furthest in the future, because this will release a register over a longer range of code, reducing the chance that we'll need to evict again - is still available. (After all, we do always know the last use, even if we don't necessarily know the first definition).

Altogether, I'm quite excited about this algorithm; I think it will be a real simplification over the current implementation. Whether that will work out remains to be seen of course. I'll let you know!

brrt to the future: Something about IR optimization

Published by Bart Wiegmans on 2019-03-17T06:23:00

Hi hackers! Today I want to write about optimizing IR in the MoarVM JIT, and also a little bit about IR design itself.

One of the (major) design goals for the expression JIT was to have the ability to optimize code over the boundaries of individual MoarVM instructions. To enable this, the expression JIT first expands each VM instruction into a graph of lower-level operators. Optimization then means pattern-matching those graphs and replacing them with more efficient expressions.

As a running example, consider the idx operator. This operator takes two inputs (base and element) and a constant parameter scale and computes base+element*scale. This represents one of the operands of an  'indexed load' instruction on x86, typically used to process arrays. Such instructions allow one instruction to be used for what would otherwise be two operations (computing an address and loading a value). However, if the element of the idx operator is a constant, we can replace it instead with the addr instruction, which just adds a constant to a pointer. This is an improvement over idx because we no longer need to load the value of element into a register. This saves both an instruction and valuable register space.

Unfortunately this optimization introduces a bug. (Or, depending on your point of view, brings an existing bug out into the open). The expression JIT code generation process selects instructions for subtrees (tiles) of the graph in a bottom-up fashion. These instructions represent the value computed or work performed by that subgraph. (For instance, a tree like (load (addr ? 8) 8) becomes mov ?, qword [?+8]; the question marks are filled in during register allocation). Because an instruction always represents a tree, and because the graph is an arbitrary directed acyclic graph, the code generator projects that graph as a tree by visiting each operator node only once. So each value is computed once, and that computed value is reused by all later references.

It is worth going into some detail into why the expression graph is not a tree. Aside from transformations that might be introduced by optimizations (e.g. common subexpression elimination), a template may introduce a value that has multiple references via the let: pseudo-operator. See for instance the following (simplified) template:

(let: (($foo (load (local))))
    (add $foo (sub $foo (const 1))))

[Diagram: both the ADD and the SUB node refer to the same LOAD node]


In this case, both references to $foo point directly to the same load operator. Thus, the graph is not a tree. Another case in which this occurs is during linking of templates into the graph. The output of an instruction is used, if possible, directly as the input for another instruction. (This is the primary way that the expression JIT can get rid of unnecessary memory operations). But there can be multiple instructions that use a value, in which case an operator can have multiple references. Finally, instruction operands are inserted by the compiler and these can have multiple references as well.

If each operator is visited only once during code generation, then this may introduce a problem when combined with another feature - conditional expressions. For instance, if two branches of a conditional expression both refer to the same value (represented by name $foo) then the code generator will only emit code to compute its value when it encounters the first reference. When the code generator encounters $foo for the second time in the other branch, no code will be emitted. This means that in the second branch, $foo will effectively have no defined value (because the code in the first branch is never executed), and wrong values or memory corruption is then the predictable result.

This bug has always existed for as long as the expression JIT has been under development, and in the past the solution has been not to write templates which have this problem. This is made a little easier by a feature of the let: operator: it inserts a do operator which orders the values that are declared to be computed before the code that references them. So this is in fact non-buggy:

(let: (($foo (load (local))))  # code to compute $foo is emitted here
  (if (...)
    (add $foo (const 1))       # $foo is just a reference
    (sub $foo (const 2))))     # and here as well

[Diagram: the DO node inserted for the LET operator ensures that the value of the LOAD node is computed before the reference in either branch]


Alternatively, if a value $foo is used in the condition of the if operator, you can also be sure that it is available in both sides of the condition.

All these methods rely on the programmer being able to predict when a value will be first referenced and hence evaluated. An optimizer breaks this by design. This means that if I want the JIT optimizer to be successful, my options are:

  1. Fix the optimizer so as to not remove references that are critical for the correctness of the program
  2. Modify the input tree so that such references are either copied or moved forward
  3. Fix the code generator to emit code for a value, if it determines that an earlier reference is not available from the current block.
In other words, I first need to decide where this bug really belongs - in the optimizer, the code generator, or even the IR structure itself. The weakness of the expression IR is that expressions don't really impose a particular order. (This is unlike the spesh IR, which is instruction-based, and in which every instruction has a 'previous' and 'next' pointer). Thus, there really isn't a 'first' reference to a value before the code generator introduces the concept. This property is in fact quite handy for optimization (for instance, we can evaluate operands in whatever order is best, rather than being fixed by the input order) - so I'd really like to preserve it. But it also means that the property we're interested in - that a value is computed before it is used, in all possible code-flow paths - isn't really expressible by the IR. And there is no obvious local invariant that can be maintained to ensure that this bug does not happen, so any correctness check may have to check the entire graph, which is quite impractical.

I hope this post explains why this is such a tricky problem! I have some ideas for how to get out of this, but I'll reserve those for a later post, since this one has gotten quite long enough. Until next time!

brrt to the future: A short post about types and polymorphism

Published by Bart Wiegmans on 2019-01-14T13:34:00

Hi all. I usually write somewhat long-winded posts, but today I'm going to try and make an exception. Today I want to talk about the expression template language used to map the high-level MoarVM instructions to low-level constructs that the JIT compiler can easily work with:

This 'language' was designed back in 2015 subject to three constraints:
Recently I've been working on adding support for floating point operations, and  this means working on the type system of the expression language. Because floating point instructions operate on a distinct set of registers from integer instructions, a floating point operator is not interchangeable with an integer (or pointer) operator.

This type system is enforced in two ways. First, by the template compiler, which attempts to check if you've used all operands correctly. This operates during development, which is convenient. Second, by instruction selection, as there will simply not be any instructions available that have the wrong combinations of types. Unfortunately, that happens at runtime, and such errors are so annoying to debug that it motivated the development of the first type checker.

However, this presents two problems. One of the advantages of the expression IR is that, by virtue of having a small number of operators, it is fairly easy to analyze. Having a distinct set of operators for each type would undo that. But more importantly, there are several MoarVM instructions that are generic, i.e. that operate on integer, floating point, and pointer values. (For example, the set, getlex and bindlex instructions are generic in this way). This makes it impossible to know whether its values will be integers, pointers, or floats.

This is no problem for the interpreter since it can treat values as bags-of-bits (i.e., it can simply copy the union MVMRegister type that holds all values of all supported types). But the expression JIT works differently - it assumes that it can place any value in a register, and that it can reorder and potentially skip storing them to memory. (This saves work when the value would soon be overwritten anyway). So we need to know what register class that is, and we need to have the correct operators to manipulate a value in the right register class.

To summarize, the problem is:
There are two ways around this, and I chose to use both. First, we know as a fact for each local or lexical value in a MoarVM frame (subroutine) what type it should have. So even a generic operator like set can be resolved to a specific type at runtime, at which point we can select the correct operators. Second, we can introduce generic operators of our own. This is possible so long as we can select the correct instruction for an operator based on the types of the operands.

For instance, the store operator takes two operands, an address and a value. Depending on the type of the value (reg or num), we can always select the correct instruction (mov or movsd). It is however not possible to select different instructions for the load operator based on the type required, because instruction selection works from the bottom up. So we need a special load_num operator, but a store_num operator is not necessary. And this is true for a lot more operators than I had initially thought. For instance, aside from the (naturally generic) do and if operators, all arithmetic operators and comparison operators are generic in this way.

I realize that, despite my best efforts, this has become a rather long-winded post anyway.....

Anyway. For the next week, I'll be taking a slight detour, and I aim to generalize the two-operand form conversion that is necessary on x86. I'll try to write a blog about it as well, and maybe it'll be short and to the point. See you later!

brrt to the future: New years post

Published by Bart Wiegmans on 2019-01-06T13:15:00

Hi everybody! I recently read jnthn's Perl 6 New Year's resolutions post, and I realized that it was an excellent example to emulate. So here I will attempt to share what I've been doing in 2018 and what I'll be doing in 2019.

In 2018, aside from the usual refactoring, bugfixing and the like:
So 2019 starts with me trying to complete the goals specified in that grant request. I've already partially completed one goal (as explained in the interim report) - ensuring that register encoding works correctly for SSE registers in DynASM. Next up is actually ensuring support for SSE (and floating point) registers in the JIT, which is surprisingly tricky, because it means introducing a type system where there wasn't really one previously. I will have more to report on that in the near future.

After that, the first thing on my list is the support for irregular register requirements in the register allocator, which should open up the possibility of supporting various instructions.

I guess that's all for now. Speak you later!

6guts: My Perl 6 wishes for 2019

Published by jnthnwrthngtn on 2019-01-02T01:35:51

This evening, I enjoyed the New Year’s fireworks display over the beautiful Prague skyline. Well, the bit of it that was between me and the fireworks, anyway. Rather than having its fireworks display at midnight, Prague puts it at 6pm on New Year’s Day. That makes it easy for families to go to, which is rather thoughtful. It’s also, for those of us with plans to dig back into work the following day, a nice end to the festive break.

[Image: Prague fireworks over Narodni Divadlo]

So, tomorrow I’ll be digging back into work, which of late has involved a lot of Perl 6. Having spent so many years working on Perl 6 compiler and runtime design and implementation, it’s been fun to spend a good amount of 2018 using Perl 6 for commercial projects. I’m hopeful that will continue into 2019. Of course, I’ll be continuing to work on plenty of Perl 6 things that are for the good of all Perl 6 users too. In this post, I’d like to share some of the things I’m hoping to work on or achieve during 2019.

Partial Escape Analysis and related optimizations in MoarVM

The MoarVM specializer learned plenty of new tricks this year, delivering some nice speedups for many Perl 6 programs. Many of my performance improvement hopes for 2019 center around escape analysis and optimizations stemming from it.

The idea is to analyze object allocations, and find pieces of the program where we can fully understand all of the references that exist to the object. The point at which we can no longer do that is where an object escapes. In the best cases, an object never escapes; in other cases, there are a number of reads and writes performed to its attributes up until its escape.

Armed with this, we can perform scalar replacement, which involves placing the attributes of the object into local registers up until the escape point, if any. As well as reducing memory operations, this means we can often prove significantly more program properties, allowing further optimization (such as getting rid of dynamic type checks). In some cases, we might never need to allocate the object at all; this should be a big win for Perl 6, which by its design creates lots of short-lived objects.

There will be various code-generation and static optimizer improvements to be done in Rakudo in support of this work also, which should result in its own set of speedups.

Expect to hear plenty about this in my posts here in the year ahead.

Decreasing startup time and base memory use

The current Rakudo startup time is quite high. I’d really like to see it fall to around half of what it currently is during 2019. I’ve got some concrete ideas on how that can be achieved, including changing the way we store and deserialize NFAs used by the parser, and perhaps also dealing with the way we currently handle method caches to have less startup impact.

Both of these should also decrease the base memory use, which is also a good bit higher than I wish.

Improving compilation times

Some folks – myself included – are developing increasingly large applications in Perl 6. For the current major project I’m working on, runtime performance is not an issue by now, but I certainly feel myself waiting a bit on compiles. Part of it is parse performance, and I’d like to look at that; in doing so, I’d also be able to speed up handling of all Perl 6 grammars.

I suspect there are some good wins to be had elsewhere in the compilation pipeline too, and the startup time improvements described above should also help, especially when we pre-compile deep dependency trees. I’d also like to look into if we can do some speculative parallel compilation.

Research into concurrency safety

In Perl 6.d, we got non-blocking await and react support, which greatly improved the scalability of Perl 6 concurrent and parallel programs. Now many thousands of outstanding tasks can be juggled across just a handful of threads (the exact number chosen according to demand and CPU count).
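
As a toy illustration of that scalability (my example, not jnthn’s): a thousand tasks are multiplexed over the pool’s handful of threads.

my @promises = (^1000).map: -> $n { start { $n ** 2 } };
say (await @promises).sum;   # OUTPUT: «332833500␤»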

For Perl 6.e, which is still a good way off, I’d like to have something to offer in terms of making Perl 6 concurrent and parallel programming safer. While we have a number of higher-level constructs that eliminate various ways to make mistakes, it’s still possible to get into trouble and have races when using them.

So, I plan to spend some time this year quietly exploring and prototyping in this space. Obviously, I want something that fits in with the Perl 6 language design, and that catches real and interesting bugs – probably by making things that are liable to occasionally explode in weird ways instead reliably do so in helpful ways, such that they show up reliably in tests.

Get Cro to its 1.0 release

In the 16 months since I revealed it, Cro has become a popular choice for implementing HTTP APIs and web applications in Perl 6. It has also attracted code contributions from a couple of dozen contributors. This year, I aim to see Cro through to its 1.0 release. That will include (to save you following the roadmap link):

Comma Community, and lots of improvements and features

I founded Comma IDE in order to bring Perl 6 a powerful Integrated Development Environment. We’ve come a long way since the Minimum Viable Product we shipped back in June to the first subscribers to the Comma Supporter Program. In recent months, I’ve used Comma almost daily on my various Perl 6 projects, and by this point honestly wouldn’t want to be without it. Like Cro, I built Comma because it’s a product I wanted to use, which I think is a good place to be in when building any product.

In a few months time, we expect to start offering Comma Community and Comma Complete. The former will be free of charge, and the latter a commercial offering under a subscribe-for-updates model (just like how the supporter program has worked so far). My own Comma wishlist is lengthy enough to keep us busy for a lot more than the next year, and that’s before considering things Comma users are asking for. Expect plenty of exciting new features, as well as ongoing tweaks to make each release feel that little bit nicer to use.

Speaking, conferences, workshops, etc.

This year will see me giving my first keynote at a European Perl Conference. I’m looking forward to being in Riga again; it’s a lovely city to wander around, and I remember having some pretty nice food there too. The keynote will focus on the concurrent and parallel aspects of Perl 6; thankfully, I’ve still a good six months to figure out exactly what angle I wish to take on that, having spoken on the topic many times before!

I also plan to submit a talk or two for the German Perl Workshop, and will probably find the Swiss Perl Workshop hard to resist attending once more. And, more locally, I’d like to try and track down other Perl folks here in Prague, and see if I can help some kind of Praha.pm to happen again.

I need to keep my travel down to sensible levels, but might be able to fit in the odd other bit of speaking during the year, if it’s not too far away.

Teaching

While I want to spend most of my time building stuff rather than talking about it, I’m up for the occasional bit of teaching. I’m considering pitching a 1-day Perl 6 concurrency workshop to the Riga organizers. Then we’ll see if there’s enough folks interested in taking it. It’ll cost something, but probably much less than any other way of getting a day of teaching from me. :-)

So, down to work!

Well, a good night’s sleep first. :-) But tomorrow, another year of fun begins. I’m looking forward to it, and to working alongside many wonderful folks in the Perl community.

rakudo.org: Rakudo Star Release 2018.10

Published on 2018-11-11T00:00:00

Perlgeek.de: Perl 6 Coding Contest 2019: Seeking Task Makers

Published by Moritz Lenz on 2018-11-10T23:00:01

I want to revive Carl Mäsak's Coding Contest as a crowd-sourced contest.

The contest will be in four phases:

For the first phase, development of tasks, I am looking for volunteers who come up with coding tasks collaboratively. Sadly, these volunteers, including myself, will be excluded from participating in the second phase.

I am looking for tasks that ...

This is non-trivial, so I'd like to have others to discuss things with, and to come up with some more tasks.

If you want to help with task creation, please send an email to moritz.lenz@gmail.com, stating your intentions to help, and your freenode IRC handle (optional).

There are other ways to help too:

In these cases you can use the same email address to contact me, or use IRC (moritz on freenode) or twitter.

Zoffix Znet: Perl 6 Advent Calendar 2018 Call for Authors

Published on 2018-10-31T00:00:00

Write a blog post about Perl 6

Zoffix Znet: A Request to Larry Wall to Create a Language Name Alias for Perl 6

Published on 2018-10-07T00:00:00

The culmination of the naming discussion

6guts: Speeding up object creation

Published by jnthnwrthngtn on 2018-10-06T22:59:11

Recently, a Perl 6 object creation benchmark result did the rounds on social media. This Perl 6 code:

class Point {
    has $.x;
    has $.y;
}
my $total = 0;
for ^1_000_000 {
    my $p = Point.new(x => 2, y => 3);
    $total = $total + $p.x + $p.y;
}
say $total;

Now (on HEAD builds of Rakudo and MoarVM) runs faster than this roughly equivalent Perl 5 code:

use v5.10;

package Point;

sub new {
    my ($class, %args) = @_;
    bless \%args, $class;
}

sub x {
    my $self = shift;
    $self->{x}
}

sub y {
    my $self = shift;
    $self->{y}
}

package main;

my $total = 0;
for (1..1_000_000) {
    my $p = Point->new(x => 2, y => 3);
    $total = $total + $p->x + $p->y;
}
say $total;

(Aside: yes, I know there’s no shortage of libraries in Perl 5 that make OO nicer than this, though those I tried also made it slower.)

This is a fairly encouraging result: object creation, method calls, and attribute access are the operational bread and butter of OO programming, so it’s a pleasing sign that Perl 6 on Rakudo/MoarVM is getting increasingly speedy at them. In fact, it’s probably a bit better at them than this benchmark’s raw numbers show, since:

While dealing with Int values got faster recently, it’s still really making two Int objects every time around that loop and having to GC them later. An upcoming new set of analyses and optimizations will let us get rid of that cost too. And yes, startup will get faster with time also. In summary, while Rakudo/MoarVM is now winning that benchmark against Perl 5, there’s still lots more to be had. Which is a good job, since the equivalent Python and Ruby versions of that benchmark still run faster than the Perl 6 one.

In this post, I’ll look at the changes that allowed this benchmark to end up faster. None of the new work was particularly ground-breaking; in fact, it was mostly a case of doing small things to make better use of optimizations we already had.

What happens during construction?

Theoretically, the default new method in turn calls bless, passing the named arguments along. The bless method then creates an object instance, followed by calling BUILDALL. The BUILDALL method goes through the set of steps needed for constructing the object. In the case of a simple object like ours, that involves checking if an x and y named argument were passed, and if so assigning those values into the Scalar containers of the object attributes.
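
For instance, a user-supplied BUILD submethod is among the steps that BUILDALL runs (my illustration, not code from the original post):

class Point {
    has $.x;
    has $.y;
    submethod BUILD(:$!x, :$!y) { note 'BUILD ran' }   # invoked via BUILDALL
}

my $p = Point.new(x => 2, y => 3);   # STDERR: «BUILD ran␤»
say $p.y;                            # OUTPUT: «3␤»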

For those keeping count, that’s at least 3 method calls (new, bless, and BUILDALL).

However, there’s a cheat. If bless wasn’t overridden (which would be an odd thing to do anyway), then the default new could just call BUILDALL directly anyway. Therefore, new looks like this:

multi method new(*%attrinit) {
    nqp::if(
      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),
      nqp::create(self).BUILDALL(Empty, %attrinit),
      $bless(|%attrinit)
    )
}

The BUILDALL method was originally a little “interpreter” that went through a per-object build plan stating what needs to be done. However, for quite some time now we’ve instead compiled a per-class BUILDPLAN method.

Slurpy sadness

To figure out how to speed this up, I took a look at how the specializer was handling the code. The answer: not so well. There were certainly some positive moments in there. Of note:

However, the new method was getting only a “certain” specialization, which is usually only done for very heavily polymorphic code. That wasn’t the case here; this program clearly constructs overwhelmingly one type of object. So what was going on?

In order to produce an “observed type” specialization – the more powerful kind – it needs to have data on the types of all of the passed arguments. And for the named arguments, that was missing. But why?

Logging of passed argument types is done on the callee side, since:

The argument logging was done as the interpreter executed each parameter processing instruction. However, when there was a slurpy, it would just swallow up all the remaining arguments without logging type information. Thus the information about the argument types was missing, and we ended up with a less powerful form of specialization.

Teaching the slurpy handling code about argument logging felt a bit involved, but then I realized there was a far simpler solution: log all of the things in the argument buffer at the point an unspecialized frame is being invoked. We’re already logging the entry to the call frame at that point anyway. This meant that all of the parameter handling instructions got a bit simpler too, since they no longer had logging to do.

Conditional elimination

Having new being specialized in a more significant way was an immediate win. Of note, this part:

      nqp::eqaddr(
        (my $bless := nqp::findmethod(self,'bless')),
        nqp::findmethod(Mu,'bless')
      ),

Was quite interesting. Since we were now specializing on the type of self, then the findmethod could be resolved into a constant. The resolution of a method on the constant Mu was also a constant. Therefore, the result of the eqaddr (same memory address) comparison of two constants should also have been turned into a constant…except that wasn’t happening! This time, it was simple: we just hadn’t implemented folding of that yet. So, that was an easy win, and once done meant the optimizer could see that the if would always go a certain way and thus optimize the whole thing into the chosen branch. Thus the new method was specialized into something like:

multi method new(*%attrinit) {
    nqp::create(self).BUILDALL(Empty, %attrinit)
}

Further, the create could be optimized into a “fast create” op, and the relatively small BUILDALL method could be inlined into new. Not bad.

Generating simpler code

At this point, things were much better, but still not quite there. I took a look at the BUILDALL method compilation, and realized that it could emit faster code.

The %attrinit is a Perl 6 Hash object, which is for the most part a wrapper around the lower-level VM hash object, which is the actual hash storage. We were, curiously, already pulling this lower-level hash out of the Hash object and using the nqp::existskey primitive to check if there was a value, but were not doing that for actually accessing the value. Instead, an .AT-KEY('x') method call was being done. While that was being handled fairly well – inlined and so forth – it also does its own bit of existence checking. I realized we could just use the nqp::atkey primitive here instead.

Later on, I also realized that we could do away with nqp::existskey and just use nqp::atkey. Since a VM-level null is something that never exists in Perl 6, we can safely use it as a sentinel to mean that there is no value. That got us down to a single hash lookup.
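
At the Raku level, the low-level hash and the null sentinel look roughly like this (a hedged sketch relying on Rakudo internals – the $!storage attribute is an implementation detail, not spec’d behaviour):

use nqp;
my %h = x => 2;
my $storage := nqp::getattr(%h, Map, '$!storage');   # the lower-level VM hash
my $got     := nqp::atkey($storage, 'x');            # single lookup
say nqp::isnull($got) ?? 'absent' !! $got;           # OUTPUT: «2␤»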

By this point, we were just about winning the benchmark, but only by a few percent. Were there any more easy pickings?

An off-by-one

My next surprise was that the new method didn’t get inlined. It was within the size limit. There was nothing preventing it. What was going on?

Looking closer, it was even worse than that. Normally, when something is too big to be inlined, but we can work out what specialization will be called on the target, we do “specialization linking”, indicating which specialization of the code to call. That wasn’t happening. But why?

Some debugging later, I sheepishly fixed an off-by-one in the code that looks through a multi-dispatch cache to see if a particular candidate matches the set of argument types we have during optimization of a call instruction. This probably increased the performance of quite a few calls involving passing named arguments to multi-methods – and meant new was also inlined, putting us a good bit ahead on this benchmark.

What next?

The next round of performance improvements – both to this code and plenty more besides – will come from escape analysis, scalar replacement, and related optimizations. I’ve already started on that work, though expect it will keep me busy for quite a while. I will, however, be able to deliver it in stages, and am optimistic I’ll have the first stage of it ready to talk about – and maybe even merge – in a week or so.

brrt to the future: A future for fork(2)

Published by Bart Wiegmans on 2018-10-03T22:18:00

Hi hackers. Today I want to write about a new functionality that I've been developing for MoarVM that has very little to do with the JIT compiler. But it is still about VM internals so I guess it will fit.

Many months ago, jnthn wrote a blog post on the relation between perl 5 and perl 6. And as a longtime and enthusiastic perl 5 user - most of the JIT's compile time support software is written in perl 5 for a reason - I wholeheartedly agree with the 'sister language' narrative. There is plenty of room for all sorts of perls yet, I hope. Yet one thing kept itching me:
Moreover, it’s very much the case that Perl 5 and Perl 6 make different trade-offs. To pick one concrete example, Perl 6 makes it easy to run code across multiple threads, and even uses multiple threads internally (for example, performing optimization and JIT compilation on a background thread). Which is great…except the only winning move in a game involving both threads and fork() is not to play. Sometimes one just can’t have their cake and eat it, and if you’re wanting a language that more directly gives you your POSIX, that’s probably always going to be a strength of Perl 5 over Perl 6.
(Emphasis mine). You see, I had never realized that MoarVM couldn't implement fork(), but it's true. In POSIX systems, a fork()'d child process inherits the full memory space, as-is, from its parent process. But it inherits only the forking thread. This means that any operations performed by any other thread, including operations that might need to be protected by a mutex (e.g. malloc()), will be interrupted and unfinished (in the child process). This can be a problem. Or, in the words of the linux manual page on the subject:

       *  The child process is created with a single thread—the one that
          called fork(). The entire virtual address space of the parent is
          replicated in the child, including the states of mutexes,
          condition variables, and other pthreads objects; the use of
          pthread_atfork(3) may be helpful for dealing with problems that
          this can cause.

       *  After a fork() in a multithreaded program, the child can safely
          call only async-signal-safe functions (see signal-safety(7)) until
          such time as it calls execve(2).

Note that the set of signal-safe functions is not that large, and excludes all memory management functions. As noted by jnthn, MoarVM is inherently multithreaded from startup, and will happily spawn as many threads as needed by the program. It also uses malloc() and friends rather a lot. So it would seem that perl 6 cannot implement POSIX fork() (in which both parent and child program continue from the same position in the program) as well as perl 5 can.

I was disappointed. As a longtime (and enthusiastic) perl 5 user, fork() is one of my favorite concurrency tools. Its best feature is that parent and child processes are isolated by the operating system, so each can modify its own internal state without concern for concurrency. This can make it practical to introduce concurrency after development, rather than designing it in from the start. Either process can crash while the other continues. It is also possible (and common) to run a child process with reduced privileges relative to the parent process, which isn't possible with threads. And it is possible to prepare arbitrarily complex state for the child process, unlike spawn() and similar calls.

Fortunately all hope isn't necessarily lost. The restrictions listed above only apply if there are multiple threads active at the moment that fork() is executed. That means that if we can stop all threads (except for the one planning to fork) before actually calling fork(), then the child process can continue safely. That is well within the realm of possibility, at least as far as system threads are concerned.

The process itself isn't very exciting to talk about, actually - it involves sending stop signals to system threads, waiting for these threads to join, verifying that the forking thread is really the only active thread, and restarting threads after fork(). Because of locking, it is a bit subtle (there may be another user thread that is also trying to fork), but not actually very hard. When I finally merged the code, it turned out that an ancient (and forgotten) thread list modification function was corrupting the list by not being aware of generational garbage collection... oops. But that was simple enough to fix (thanks to timotimo++ for the hint).

And now the following one-liner should work on platforms that support fork() (using development versions of MoarVM and Rakudo):

perl6 -e 'use nqp; my $i = nqp::fork(); say $i;'
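
For a fuller picture, here is a sketch of how that might be used, assuming nqp::fork() mirrors the POSIX convention of returning 0 in the child and the child's process id in the parent (nqp:: ops are internal, unsupported API, so treat this as illustrative):

use nqp;

my $pid = nqp::fork();
if $pid == 0 {
    say "child: running as pid $*PID";       # the child continues from this same point
}
else {
    say "parent: forked a child with pid $pid";
}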

The main limitation of this work is that it won't work if there are any user threads active. (If there are any, nqp::fork() will throw an exception). The reason why is simple: while it is possible to adapt the system threads so that I can stop them on demand, user threads may be doing arbitrary work, hold arbitrary locks and may be blocked (possibly indefinitely) on a system call. So jnthn's comment at the start of this post still applies - threads and fork() don't work together.

In fact, many programs may be better off with threads. But I think that languages in the perl family should let the user make that decision, rather than the VM. So I hope that this will find some use somewhere. If not, it was certainly fun to figure out. Happy hacking!


PS: For the curious, I think there may in fact be a way to make fork() work under a multithreaded program, and it relies on the fact that MoarVM has a multithreaded garbage collector. Basically, stopping all threads before calling fork() is not so different from stopping the world during the final phase of garbage collection. And so - in theory - it would be possible to hijack the synchronization mechanism of the garbage collector to pause all threads. During interpretation, and in JIT compiled code, each thread periodically checks if garbage collection has started. If it has, it will synchronize with the thread that started GC in order to share the work. Threads that are currently blocked (on a system call, or on acquiring a lock) mark themselves as such, and when they are resumed always check the GC status. So it is in fact possible to force MoarVM into a single active thread even with multiple user threads active. However, that still leaves cleanup to deal with, after returning from fork() in the child process. Also, this will not work if a thread is blocked on NativeCall. Altogether I think abusing the GC in order to enable a fork() may be a bit over the edge of insanity :-)

6guts: Eliminating unrequired guards

Published by jnthnwrthngtn on 2018-09-29T19:59:28

MoarVM’s optimizer can perform speculative optimization. It does this by gathering statistics as the program is interpreted, and then analyzing them to find out what types and callees typically show up at given points in the program. If it spots there is at least a 99% chance of a particular type showing up at a particular program point, then it will optimize the code ahead of that point as if that type would always show up.

Of course, statistics aren’t proofs. What about the 1% case? To handle this, a guard instruction is inserted. This cheaply checks if the type is the expected one, and if it isn’t, falls back to the interpreter. This process is known as deoptimization.
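
To make that concrete, here is a hypothetical Perl 6 illustration of the life cycle (the exact thresholds and heuristics are internal details, so take the comments as a sketch):

sub scale($x) { $x * 2 }             # $x is observed to be an Int nearly every time
my $sum = 0;
$sum += scale($_) for ^100_000;      # statistics say Int, so spesh guards on it and specializes
say scale(0.5);                      # a rare Rat argument fails the guard: deoptimize and interpret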

Just how cheap are guards?

I just stated that a guard cheaply checks if the type is the expected one, but just how cheap is it really? There are both direct and indirect costs.

The direct cost is that of the check. Here’s a (slightly simplified) version of the JIT compiler code that produces the machine code for a type guard.

/* Load object that we should guard */
| mov TMP1, WORK[obj];
/* Get type table we expect and compare it with the object's one */
MVMint16 spesh_idx = guard->ins->operands[2].lit_i16;
| get_spesh_slot TMP2, spesh_idx;
| cmp TMP2, OBJECT:TMP1->st;
| jne >1;
/* We're good, no need to deopt */
| jmp >2;
|1:
/* Call deoptimization handler */
| mov ARG1, TC;
| mov ARG2, guard->deopt_offset;
| mov ARG3, guard->deopt_target;
| callp &MVM_spesh_deopt_one_direct;
/* Jump out to the interpreter */
| jmp ->exit;
|2:

Where get_spesh_slot is a macro like this:

|.macro get_spesh_slot, reg, idx;
| mov reg, TC->cur_frame;
| mov reg, FRAME:reg->effective_spesh_slots;
| mov reg, OBJECTPTR:reg[idx];
|.endmacro

So, in the case that the guard matches, it’s 7 machine instructions (note: it’s actually a couple more because of something I omitted for simplicity). Thus there’s the cost of the time to execute them, plus the space they take in memory and, especially, the instruction cache. Further, one is a conditional jump. We’d expect it to be false most of the time, and so the CPU’s branch predictor should get a good hit rate – but branch predictor usage isn’t entirely free of charge either. Effectively, it’s not that bad, but it’s nice to save the cost if we can.

The indirect costs are much harder to quantify. In order to deoptimize, we need to have enough state to recreate the world as the interpreter expects it to be. I wrote on this topic not so long ago, for those who want to dive into the detail, but the essence of the problem is that we may have to retain some instructions and/or forgo some optimizations so that we are able to successfully deoptimize if needed. Thus, the presence of a guard constrains what optimizations we can perform in the code around it.

Representation problems

A guard instruction in MoarVM originally looked like:

sp_guard r(obj) sslot uint32

Where r(obj) is an object register to read containing the object to guard, the sslot is a spesh slot (an entry in a per-block constant table) containing the type we expect to see, and the uint32 indicates the target address after we deoptimize. Guards are inserted after instructions for which we had gathered statistics and determined there was a stable type. Things guarded include return values after a call, reads of object attributes, and reads of lexical variables.

This design has carried us a long way; however, it has a major shortcoming. The program is represented in SSA form. Thus, an invoke followed by a guard might look something like:

invoke r6(5), r4(2)
sp_guard r6(5), sslot(42), litui32(64)

Where r6(5) has the return value written into it (and thus is a new SSA version of r6). We hold facts about a value (if it has a known type, if it has a known value, etc.) per SSA version. So the facts about r6(5) would be that it has a known type – the one that is asserted by the guard.

The invoke itself, however, might be optimized by performing inlining of the callee. In some cases, we might then know the type of result that the inlinee produces – either because there was a guard inside of the inlined code, or because we can actually prove the return type! However, since the facts about r6(5) were those produced by the guard, there was no way to talk about what we know of r6(5) before the guard and after the guard.

More awkwardly, while in the early days of the specializer we only ever put guards immediately after the instructions that read values, more recent additions might insert them at a distance (for example, in speculative call optimizations and around spesh plugins). In this case, we could not safely set facts on the guarded register, because those might lead to wrong optimizations being done prior to the guard.

Changing of the guards

Now a guard instruction looks like this:

sp_guard w(obj) r(obj) sslot uint32

Or, concretely:

invoke r6(5), r4(2)
sp_guard r6(6), r6(5), sslot(42), litui32(64)

That is to say, it introduces a new SSA version. This means that we get a way to talk about the value both before and after the guard instruction. Thus, if we perform an inlining and we know exactly what type it will return, then that type information will flow into the input – in our example, r6(5) – of the guard instruction. We can then notice that the property the guard wants to assert is already upheld, and replace the guard with a simple set (which may itself be eliminated by later optimizations).

This also solves the problem with guards inserted away from the original write of the value: we get a new SSA version beyond the guard point. This in turn leads to more opportunities to avoid repeated guards beyond that point.

Quite a lot of return value guards on common operations simply go away thanks to these changes. For example, in $a + $b, where $a and $b are Int, we will be able to inline the + operator, and we can statically see from its code that it will produce an Int. Thus, the guard on the return type in the caller of the operator can be eliminated. This saves the instructions associated with the guard, and potentially allows for further optimizations to take place since we know we’ll never deoptimize at that point.
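
As an illustration, this is the shape of code that benefits (a sketch, not a measured benchmark):

my Int $total = 0;
$total = $total + $_ for 1 .. 100_000;   # hot Int + Int: the operator inlines, its result is
say $total;                              # provably Int, and the caller's return-type guard goes away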

In summary

MoarVM does lots of speculative optimization. This enables us to optimize in cases where we can’t prove a property of the program, but statistics tell us that it mostly behaves in a certain way. We make this safe by adding guards, and falling back to the general version of the code in cases where they fail.

However, guards have a cost. By changing our representation of them, so that we model the data coming into the guard and after the guard as two different SSA versions, we are able to eliminate many guard instructions. This not only reduces duplicate guards, but also allows for elimination of guards when the broader view afforded by inlining lets us prove properties that we weren’t previously able to.

In fact, upcoming work on escape analysis and scalar replacement will allow us to start seeing into currently opaque structures, such as Scalar containers. When we are able to do that, then we’ll be able to prove further program properties, leading to the elimination of yet more guards. Thus, this work is not only immediately useful, but also will help us better exploit upcoming optimizations.

6guts: Faster box/unbox and Int operations

Published by jnthnwrthngtn on 2018-09-28T22:43:55

My work on Perl 6 performance continues, thanks to a renewal of my grant from The Perl Foundation. I’m especially focusing on making common basic operations faster, the idea being that if those go faster, then programs composed out of them should too. This appears to be working out well: I’ve not been directly trying to make the Text::CSV benchmark run faster, but that’s resulted from my work.

I’ll be writing a few posts on the various changes I’ve made. This one will take a look at some related optimizations around boxing, unboxing, and common mathematical operations on Int.

Boxing and unboxing

Boxing is taking a natively typed value and wrapping it into an object. Unboxing is the opposite: taking an object that wraps a native value and getting the native value out of it.

In Perl 6, these are extremely common. Num and Str are boxes around num and str respectively. Thus, unless dealing with natives explicitly, working with floating point numbers and strings will involve lots of box and unbox operations.
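
For example, in ordinary Perl 6 code:

my num $x = 1.5e0;            # a native float: no object involved
my $boxed = $x;               # assigning to an untyped scalar boxes it into a Num
say $boxed.^name;             # OUTPUT: «Num␤»
my num $y = $boxed + 1e0;     # using the result as a native again involves an unbox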

There’s nothing particularly special about Num and Str. They are normal objects with the P6opaque representation, meaning they can be mixed into. The only thing that makes them slightly special is that they have an attribute marked as being a box target. This marks the attribute out as the one to write to or read from in a box or unbox operation.

Thus, a box operation is something like:

  1. Allocate an instance of the box type
  2. Find the location of the box target attribute within that object
  3. Write the native value into that location

While unbox is:

  1. Find the location of the box target attribute within the object
  2. Read the native value from that location

Specialization of box and unbox

box is actually two operations: an allocation and a store. We know how to fast-path allocations and JIT them relatively compactly, however that wasn’t being done for box. So, step one was to decompose this higher-level op into the allocation and the write into the allocated object. The first step could then be optimized in the usual way allocations are.

In the unspecialized path, we first find out where to write the native value to, and then write it. However, when we’re producing a specialized version, we almost always know the type we’re boxing into. Therefore, the object offset to write to can be calculated once, and a very simple instruction to do a write at an offset into the object produced. This JITs extremely well.

There are a couple of other tricks. Binds into a P6opaque generally have to check that the object wasn’t mixed in to, however since we just allocated it then we know that can’t be the case and can skip that check. Also, a string is a garbage-collectable object, and when assigning one GC-able object into another one, we need to perform a write barrier for the sake of generational GC. However, since the object was just allocated, we know very well that it is in the nursery, and so the write barrier will never trigger. Thus, we can omit it.

Unboxing is easier to specialize: just calculate the offset, and emit a simpler instruction to load the value from there.

I’ve also started some early work (quite a long way from merge) on escape analysis, which will allow us to eliminate many box object allocations entirely. It’s a great deal easier to implement this if allocations, reads, and writes to an object have a uniform representation. By lowering box and unbox operations into these lower level operations, this eases the path to implementing escape analysis for them.

What about Int?

Some readers might have wondered why I talked about Num and Str as examples of boxed types, but not Int. It is one too – but there’s a twist. Actually, there are two twists.

The first is that Int isn’t actually a wrapper around an int, but rather an arbitrary precision integer. When we first implemented Int, we had it always use a big integer library. Of course, this is slow, so later on we made it so any number fitting into a 32-bit range would be stored directly, and only allocate a big integer structure if it’s outside of this range.
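
The switch in representation is invisible at the language level:

my $small = 2 ** 31 - 1;    # fits the 32-bit range: stored directly, no big integer allocated
my $big   = 2 ** 64;        # out of range: backed by a big integer structure
say $small.^name;           # OUTPUT: «Int␤»
say $big.^name;             # OUTPUT: «Int␤» (same type either way)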

Thus, boxing to a big integer means range-checking the value to box. If it fits into the 32-bit range, then we can write it directly, and set the flag indicating that it’s a small Int. Machine code to perform these steps is now spat out directly by the JIT, and we only fall back to a function call in the case where we need a big integer. Once again, the allocation itself is emitted in a more specialized way too, and the offset to write to is determined once at specialization time.

Unboxing is similar. Provided we’re specializing on the type of the object to unbox, then we calculate the offset at specialization time. Then, the JIT produces code to check if the small Int flag is set, and if so just reads and sign extends the value into a 64-bit register. Otherwise, it falls back to the function call to handle the real big integer case.

For boxing, however, there was a second twist: we have a boxed integer cache, so for small integers we don’t have to repeatedly allocate objects boxing them. So boxing an integer is actually:

  1. Check if it’s in the range of the box cache
  2. If so, return it from the cache
  3. Otherwise, do the normal boxing operation
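
A minimal Perl 6 sketch of the cache idea (the real cache lives in C inside MoarVM; the names and bounds here are made up for illustration):

sub allocate-box(int $v) { Int.new($v) }              # stand-in for the real allocation
my @box-cache = (0..255).map: { allocate-box($_) };   # pre-boxed small values
sub box-int(int $v) {
    0 <= $v <= 255
        ?? @box-cache[$v]       # cache hit: reuse a shared object, no allocation at all
        !! allocate-box($v)     # cache miss: fall back to the normal boxing operation
}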

When I first did these changes, I omitted the use of the box cache. It turns out, however, to have quite an impact in some programs: one benchmark I was looking at suffered quite a bit from the missing box cache, since it now had to do a lot more garbage collection runs.

So, I reinstated use of the cache, but this time with the JIT doing the range checks in the produced machine code and reading directly out of the cache in the case of a hit. Thus, in the cache hit case, we now don’t even make a single function call for the box operation.

Faster Int operations

One might wonder why we picked 32-bit integers as the limit for the small case of a big integer, and not 64-bit integers. There’s two reasons. The most immediate is that we can then use the 32 bits that would be the lower 32 of a 64-bit pointer to the big integer structure as our “this is a small integer” flag. This works reliably as pointers are always aligned to at least a 4-byte boundary, so a real pointer to a big integer structure would never have the lowest bits set. (And yes, on big-endian platforms, we swap the order of the flag and the value to account for that!)

The second reason is that there’s no portable way in C to detect if a calculation overflowed. However, out of the basic math operations, if we have two inputs that fit into a 32-bit integer, and we do them at 64-bit width, we know that the result can never overflow the 64-bit integer. Thus we can then range check the result and decide whether to store it back into the result object as 32-bit, or to store it as a big integer.
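
Here is that width trick sketched in Perl 6 (MoarVM's real version is C, and the JIT goes further still, as described below):

sub add-small(int $a, int $b) {
    my int $r = $a + $b;               # inputs fit in 32 bits, so this cannot overflow 64 bits
    -(2 ** 31) <= $r < 2 ** 31
        ?? "store $r directly as a small Int"
        !! "allocate a big integer structure for $r"
}
say add-small(2 ** 30, 2 ** 30);   # OUTPUT: «allocate a big integer structure for 2147483648␤»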

Since Int is immutable, all operations result in a newly allocated object. This allocation – you’ll spot a pattern by now – is open to being specialized. Once again, finding the boxed value to operate on can also be specialized, by calculating its offset into the input objects and result object. So far, so familiar.

However, there’s a further opportunity for improvement if we are JIT-compiling the operations to machine code: the CPU has flags for if the last operation overflowed, and we can get at them. Thus, for two small Int inputs, we can simply:

  1. Grab the values
  2. Do the calculation at 32-bit width
  3. Check the flags, and store it into the result object if no overflow
  4. If it overflowed, fall back to doing it wider and storing it as a real big integer

I’ve done this for addition, subtraction, and multiplication. Those looking for a MoarVM specializer/JIT task might like to consider doing it for some of the other operations. :-)

In summary

Boxing, unboxing, and math on Int all came with various indirections for the sake of generality (coping with mixins, subclassing, and things like IntStr). However, when we are producing type-specialized code, we can get rid of most of the indirections, resulting in being able to perform them faster. Further, when we JIT-compile the optimized result into machine code, we can take some further opportunities, reducing function calls into C code as well as taking advantage of access to the overflow flags.

samcv: Adding and Improving File Encoding Support in MoarVM

Published on 2018-09-26T00:00:00

Encodings. They may seem to some as horribly boring and bland. Nobody really wants to have to worry about the details. Encodings are how we communicate, transmit data. Their purpose is to be understood by others. When they work, nobody should even know they are there. When they don’t — and everyone can probably remember a time when they have tried to open a poorly encoded or corrupted file and been disappointed — they cannot be ignored.

Here I talk about the work I have done in the past and work still in progress to improve the support of current encodings, add new encodings, and add new features and new options to allow Perl 6 and MoarVM to support more encodings and in a way which better achieves the goals encodings were designed to solve.

Which Encodings Should We Add?

I started looking at the most common encodings on the Web. We supported the two most common, UTF-8 and ISO-8859-1, but we did not support windows-1251 (Cyrillic) or Shift-JIS. This seemed like the makings of a new project, so I got to work.

I decided to start with windows-1251, since it was an 8-bit encoding, while Shift-JIS was a variable-length one- or two-byte encoding (the 'shift' in the name refers to the shifted second byte). While the encoding itself was simpler than Shift-JIS, I soon realized there were issues with both windows-1251 and our already supported windows-1252 encoding.

One of the first major issues I found in windows-1252 was that you could create files which could not be decoded by other programs. An example of this is codepoint 0x81 (129), which has no mapping in the windows-1252 specification. This would cause many programs to fail to detect the file as windows-1252, or to report that the file was corrupted and could not be decoded.

While our non-8-bit encodings would throw an error if asked to encode text which did not exist in that code mapping, our 8-bit encodings would happily pass through invalid codepoints, as long as they fit in 8 bits.

As I said at the start, encodings are a way for us to communicate, to exchange data and to be understood. This, to me, indicated a major failing. While the solution could be to never attempt to write codepoints that don’t exist in the target encoding, that was not an elegant solution; it would just be a workaround.

To remedy this I had to add new ops, such as decodeconf (to replace the decode op) and encodeconf (to replace encode), plus many others. These allow us to specify a configuration for our encodings, so Perl 6 can tell MoarVM whether it wants to encode strictly according to the standard or to be more lenient.

I added a new :strict option to open, encode and a few others, to let you specify whether encoding and decoding should be strict or not. For the sake of backwards compatibility it still defaults to lenient encoding and decoding. Strict is planned to become the default for Perl 6 specification 6.e.
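
For example, with the Str.encode interface this work feeds into (lenient remains the default; strict is opt-in):

my $text = "abc\x[81]";                       # 0x81 has no mapping in windows-1252
say $text.encode('windows-1252').bytes;       # OUTPUT: «4␤» (leniently passed through)
try $text.encode('windows-1252', :strict);    # strict mode refuses to encode it
say "strict encode failed" if $!;             # OUTPUT: «strict encode failed␤»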

Replacements

Some of our other encodings had support for 'replacements'. If you tried to encode something that would not fit in the target encoding, this allowed you to supply a string, of one or more characters, which would be substituted instead of causing MoarVM to throw an exception.

Once I had the strictness nailed down, I was able to add support for replacements, so one can write data in strict mode while still ensuring all compatible characters get written properly. You no longer have to choose between lenient encoding, which risks writing incompatible files, and strict encoding, which risks the write failing just when you really need it to succeed.
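
For example, using the :replacement adverb on Str.encode:

my $buf = "snow\x[2603]man".encode('ascii', :replacement('?'));   # ☃ cannot be encoded as ASCII
say $buf.decode('ascii');                                         # OUTPUT: «snow?man␤»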

Shift-JIS

Encodings are not without difficulty. As with the previous encodings I talked about, Shift-JIS came with decisions that had to be made. With Shift-JIS, the question became, "which Shift-JIS?".

You see, there are dozens of different extensions to Shift-JIS, in addition to the original Shift-JIS encoding. As a variable-length encoding, most of the one-byte codepoints had been taken, while there were many hundreds of open, unallocated codepoints in the two-byte range. This saw a wide proliferation of manufacturer-specific extensions, which other manufacturers might adopt while adding their own extensions to the format.

Which of the dozens of different Shift-JIS encodings should we support? I eventually decided on windows-932, because it is the standard used by web browsers when they encounter Shift-JIS text, it encompasses a great many symbols, and it is the most widely used Shift-JIS variant. This would allow us to support the majority of Shift-JIS encoded documents out there. Most, but not all, of its characters are compatible with the other extensions, so it seemed like the sensible choice.

The only notable exception is that windows-932 is not totally compatible with the original Shift-JIS encoding: while the original Shift-JIS encoding (and some extensions to it) maps ASCII’s backslash to the yen symbol ¥, windows-932 keeps it as a backslash. This retains better compatibility with ASCII and other formats.

UTF-16

While MoarVM had a UTF-16 encoder and decoder, it was not fully featured.

  • You could encode a string into a utf16 buffer:

    • "hello".encode('utf16') #> utf16:0x<68 65 6c 6c 6f>

  • You could decode a utf16 buffer into a string:

    • utf16.new(0x68, 0x65, 0x6C, 0x6C, 0x6F).decode('utf16') #> hello

That was about it. You couldn’t open a filehandle as UTF-16 and write to it. You couldn’t even do $fh.write: "hello".encode('utf16'), as the write function did not know how to deal with writing a 16-bit buffer (it expected an 8-bit buffer).

In addition there was no streaming UTF-16 decoder, so there was no way to read a UTF-16 file. So I set out to work, first allowing us to write 16-bit buffers to a file, and then making it possible to .print to a filehandle and write text to it.
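
The end result is that round-tripping a UTF-16 file works as you would expect (on builds that include this work):

my $fh = open 'greeting.txt', :w, :enc<utf16>;
$fh.print: 'hello';
$fh.close;
say slurp 'greeting.txt', :enc<utf16>;   # OUTPUT: «hello␤»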

At this point I knew that I would have to confront the BOM in the room, but decided to first implement the streaming decoder and ensure all of our file handle operations worked.

The BOM in the Room

You may be noticing a theme here. You do not just implement an encoding. It does not exist in a vacuum. While there may be standards, there may be multiple standards. And those standards may or may not be followed in practice or it may not be totally straightforward.

Endianness defines the order of the bytes within a number. Big-endian machines store a 16-bit number with the most significant byte first, similar to how we write numbers, while little-endian machines put the least significant byte first. This only matters for encoding numbers that are more than one byte wide. UTF-16 can be either big-endian or little-endian, meaning big- and little-endian files of the same text contain the same bytes, but the order within each 16-bit unit differs.

Since swapping the two bytes of a 16-bit integer yields a different valid integer rather than an invalid one, the creators of UTF-16 knew it was not possible to determine with certainty the endianness of a 16-bit number. And so the byte order mark was created: a codepoint that would be added at the start of a file to signify the file's endianness. Since they didn’t want to break already existing software, they used a "Zero width no-break space" (ZWNBSP) and designated that a ZWNBSP with its bytes reversed would be invalid in UTF-16, a "non-character".

If the BOM is not removed, it is not visible, since it is zero width. If your program opens a file and sees a byte order mark, it knows it is reading in the correct endianness. If it sees the "non-character" instead, it knows it has to swap the bytes as it reads the file.

The standard states that utf16 should always use a BOM, and that when reading a file as utf16 the BOM should be used to detect the endianness. The BOM is not passed through, as it is assumed not to be part of the actual data. If no BOM exists, the system is supposed to decode using the endianness that the context suggests. So you get a format where you may lose the first codepoint of your file if that codepoint happens to be a zero width no-break space.

For utf16be (big endian) and utf16le (little endian) the standard states the byte order mark should not exist, and that any zero width no-break space at the start of a file is just that, and should be passed through.

Standards and practice are not identical, and in practice many programs will remove a leading zero width no-break space even if set explicitly to utf16be or utf16le. I was unsure which route I should take. I looked at how Perl 5 handled it, and after thinking for a long time I decided that we would follow the spec. Even though other software will get rid of any detected byte order mark, I think it’s important that if a user specifies utf16be or utf16le, they will always get back the same codepoints that they write to a file.
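
In code, the behaviour described above boils down to something like this (illustrative, with byte values spelled out):

say Buf.new(0xFF, 0xFE, 0x68, 0x00).decode('utf16');   # BOM says little-endian; BOM consumed: «h»
say Buf.new(0x68, 0x00).decode('utf16le');             # endianness given explicitly: «h»
say Buf.new(0xFE, 0xFF).decode('utf16be').ords;        # leading ZWNBSP is real data: (65279)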

Current Work

I am currently working on adding support so that when you write a file in 'utf16' it will add a byte order mark, as utf16 is supposed to always have one. Until this is added, a file written with Perl 6 on one computer using the 'utf16' encoding will not decode correctly with the same 'utf16' setting on a computer of different endianness.

Since there was no functionality for writing utf16 to a file previously, there should be no backwards compatibility issues to worry about, and we can make the utf16 encoder add a byte order mark.

Release 2018.09

As of the last release, utf16, utf16le and utf16be should work, aside from utf16 not yet adding a byte order mark on file write. Anyone who uses utf16 and finds issues, or has comments, is free to email me or open an issue on the MoarVM or Rakudo issue trackers.

Zoffix Znet: The 100 Day Plan: The Update on Perl 6.d Preparations

Published on 2018-08-09T00:00:00

Info on how 6.d release prep is going

rakudo.org: Rakudo Star Release 2018.06

Published on 2018-08-06T00:00:00

Zoffix Znet: Introducing: Perl 6 Marketing Assets Web App

Published on 2018-08-05T00:00:00

Get your Perl 6 flyers and brochures

Zoffix Znet: Introducing: Newcomer Guide to Contributing to Core Perl 6

Published on 2018-08-02T00:00:00

Info on the new guide for newcomers