Planet Raku




Chatnik: LLM Host in the Shell — Part 1: First Examples & Design Principles

Published by Anton Antonov on 2026-04-25T15:53:00

Introduction

“Chatnik” is a Raku package that provides Command Line Interface (CLI) scripts for conversing with multiple, persistent Large Language Model (LLM) personas. Files of the host Operating System (OS) are used to maintain persistence.

Most importantly, “Chatnik” does not try to entrench users in its own user experience (loop) for interaction with LLMs. Instead, it brings customizable LLM invocations and conversations into the Unix shell — making them composable, integratable, and scriptable with existing workflows.

In other words, the tag line “LLM Host in the Shell” should be understood as “LLMs, not as an app — but as a Unix shell primitive.”

Here are the most notable “Chatnik” features:

Remark: “Chatnik” closely follows the LLM chat-object interaction system of the Raku package “Jupyter::Chatbook”, [AAp3], using the OS shell instead of Jupyter notebooks.

The rest of this document is organized as follows:


Introductory examples

The examples in this section demonstrate how the CLI scripts llm-chat and llm-chat-meta — provided by “Chatnik” — are used to hold multi-turn LLM conversations and to compose Unix shell pipelines that include LLM interactions.

Remark: Instead of llm-chat and llm-chat-meta, the CLI script chatnik can be used: chatnik invokes llm-chat, and chatnik meta invokes llm-chat-meta.

Remark: The prompts used in the examples are provided by the Raku package “LLM::Prompts”, [AAp2]. Since many of the prompts of that package have dedicated pages at the Wolfram Prompt Repository (WPR), the examples use WPR reference links.

Chat with Yoda

Here we create an LLM persona — by naming it and “priming it” with a prompt — and start interacting with it:

llm-chat --chat-id=yoda --prompt=@Yoda 'Hi! Who are you?'

Here we continue the conversation — using the -i synonym of --chat-id and an unquoted message argument:

llm-chat -i=yoda How many students did you have

And continue the discussion some more:

llm-chat -i=yoda 'Which student is the best?'

The example used the LLM persona “Yoda”.
(See more LLM personas here.)

Fortune-echo-limerick pipeline

Here we specify a pipeline for

  1. Getting a fortune
  2. Echoing it
  3. Using the fortune to make a limerick

fortune | tee /dev/tty | llm-chat --prompt="Make a limerick from the given text:"
Space is big. You just won't believe how vastly, hugely, mind-bogglingly
big it is. I mean, you may think it's a long way down the road to the
drug store, but that's just peanuts to space.
-- The Hitchhiker's Guide to the Galaxy
There once was a space vast and wide,
Whose scale no one could quite abide.
Though the drug store seems near,
Space’s size is sincere—
Mind-bogglingly big can’t be denied!

Remark: In the shell command above, llm-chat created (or reused) a chat object with the default identifier “NONE”.

Make a diagram from previous results

Here we use prompt expansion to request the creation of a Mermaid-JS diagram via the
prompt “CodeWriterX”:

llm-chat '!CodeWriterX|"Mermaid-JS code of the concepts"^'
```mermaid
sequenceDiagram
participant User
participant Space
User->>Space: Thinks space is big
Note right of Space: Space is vastly, hugely, mind-bogglingly big
User->>Space: Compares to drug store distance
Note right of Space: Drug store distance is just peanuts to space
```

Since the result is given in Markdown code fences, we take the last message via the CLI script llm-chat-meta,
then use sed to remove the first and last lines, and then pass that text to the terminal
Mermaid-JS visualizer mmdflux:

llm-chat-meta last-message | sed '1d; $d' | mmdflux

┌──────┐                           ┌───────┐
│ User │                           │ Space │
└───┬──┘                           └───┬───┘
    │                                  │
    │─Thinks space is big─────────────>│
    │                                  │
    │                                  │ ┌──────────────────────────────────────────────┐
    │                                  │ │ Space is vastly, hugely, mind-bogglingly big │
    │                                  │ └──────────────────────────────────────────────┘
    │                                  │
    │─Compares to drug store distance─>│
    │                                  │
    │                                  │ ┌──────────────────────────────────────────────┐
    │                                  │ │ Drug store distance is just peanuts to space │
    │                                  │ └──────────────────────────────────────────────┘
    │                                  │

Remark: Since the result is usually given in Markdown code fences, we did not make a pipeline to plot the diagram. We used two shell commands in order to observe the intermediate result.

Remark: The default chat object identifier for both llm-chat and llm-chat-meta is “NONE”.

Copy-editing

Here is a very practical example — this document was copy-edited with the prompt “CopyEdit” using the following commands:

cat Chatnik-LLM-Host-in-the-Shell-Part-1.md | llm-chat -i=ce --prompt=@CopyEdit --model=gpt-5.4-mini --max-tokens=16384
llm-chat-meta -i=ce last-message > Chatnik-LLM-Host-in-the-Shell-Part-1_edited.md
open Chatnik-LLM-Host-in-the-Shell-Part-1_edited.md

(And, yes, the LLM copy-edited version was evaluated, and some edits were rejected.)


Why make another LLM-CLI system?

Some questions to answer

Why do it?

Most LLM interfaces — both “big” popular ones and those built by developers experimenting with LLMs — default to an application-centric design: a closed interaction loop with implicit state. This pattern is convenient but very limiting. It can be seen, cynically, as an intentional effort at user lock-in, or simply as an attempt to impose certain user-experience views, and it works against the “freedom enabling” Unix design principles, such as composability, transparency, and scriptability.

With “Chatnik”, instead of adapting workflows to fit an LLM application, LLM capabilities are brought into the shell as first-class primitives. This enables reuse of existing tooling (pipes, redirects, scripts) and aligns LLM interaction with long-established UNIX practices.

Why was it relatively easy to do?

“Chatnik” is a composition of existing capabilities rather than a ground-up implementation:

Remark: Related to the last point above, the following quote is attributed to Ken Thompson about UNIX:

We have persistent objects, they’re called files.

Remark: Less obnoxiously, instead of saying that LLM providers expose messy, non-uniform APIs, we can say that their APIs “are individually reasonable, but collectively inconsistent.” Because of the popularity of OpenAI’s models, many LLM providers follow OpenAI’s API to a degree. Still, the APIs — collectively — have inconsistent schemas, authorization, streaming, tool-calling, roles, etc.

Why is it useful?

“Chatnik” is useful because it places LLM capabilities in a natural manner into Unix shell workflows:


Architectural design

The following flowchart summarizes the computational components and their interactions fairly well:

Here is a concise narration of the flow:

Expanded narration

Chatnik is built around the principle that LLM interaction should behave like a native shell capability, not a siloed application.
A command issued in the OS shell is treated as the entry point into a composable pipeline, where LLM calls can participate alongside standard UNIX tools.

State is externalized and file-backed, not hidden in process memory.
Chat sessions are represented as chat objects that are ingested from and persisted to the file system.
This makes conversations durable, inspectable, and naturally versionable using existing OS tools.

Chat identity is explicit but optional.
When a chat ID is provided, the corresponding conversation is resumed; when absent or unknown, a new chat object is created.
This allows both ad-hoc interactions and long-lived conversational contexts without friction.
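As a Raku-level sketch of what a named, resumable conversation looks like underneath — Chatnik builds on the chat objects of “LLM::Functions”; the llm-chat constructor, the chat-id argument, and the .eval method are used here as I recall them from that package’s documentation, and an API key for the chosen provider is assumed:

use LLM::Functions;

# A named chat object keeps the conversation context across turns;
# Chatnik's contribution is persisting such objects to files between invocations.
my $chat = llm-chat('You are a terse Unix expert.', chat-id => 'unix-helper');
say $chat.eval('What does tee do in a pipeline?');
say $chat.eval('Show the same idea for stderr.');   # the follow-up sees the earlier turn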

Prompting is treated as a programmable layer.
Inputs are not passed directly to models; they are first parsed through a lightweight DSL.
Known prompts are expanded from a prompt repository, enabling reuse, parameterization, and standardization of interactions.
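At the package level this corresponds to “LLM::Prompts”; a small hedged sketch using its llm-prompt and llm-prompt-expand functions, with the prompt names from the examples above:

use LLM::Prompts;

# Look up a prompt by name from the repository...
say llm-prompt('Yoda');
# ...and expand a prompt-spec string, as llm-chat does before evaluation.
say llm-prompt-expand('@Yoda Hi! Who are you?');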

LLM invocation is abstracted but not obscured.
Evaluation is delegated to “LLM::Functions”, which provides a uniform interface over multiple providers, including OpenAI (ChatGPT), Google (Gemini), and Ollama.
This keeps provider choice flexible while preserving a consistent workflow.
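For illustration, a minimal Raku-level sketch of that uniformity, assuming API keys for the chosen providers are configured (function and parameter names per “LLM::Functions”):

use LLM::Functions;

# The same request, routed to different providers through one interface.
my $question = 'Summarize the Unix philosophy in one sentence.';
say llm-synthesize($question, e => llm-configuration('ChatGPT'));
say llm-synthesize($question, e => llm-configuration('Gemini'));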

The system is designed for composability and integration.
Each stage—state ingestion, prompt processing, evaluation, and persistence—can be understood as part of a pipeline.
This makes LLM interactions scriptable, chainable, and interoperable with existing command-line utilities.

Persistence is a first-class outcome of every interaction.
Every evaluation both returns a result to the shell and updates the underlying chat object store, ensuring that conversational context evolves incrementally and reliably.

In short, to reiterate the point in the introduction, “Chatnik” treats LLMs as shell-native, stateful, and programmable primitives —
aligning conversational AI with the philosophy of UNIX pipelines rather than application-bound interfaces.


Related and alternative packages

In this section, we point to Raku packages that are both ingredients of, and alternatives to, “Chatnik”.

Main ingredients

The LLM chat-object creation and interaction functionality is provided by “LLM::Functions”, [AAp1].

Prompt collection, prompt spec DSL, and related prompt expansion are provided by “LLM::Prompts”, [AAp2]. The CLI script llm-prompt of “LLM::Prompts” can be used to examine, retrieve, and concretize prompts. For example, here is how to see the full text of the function prompt “MermaidDiagram” with given arguments:

llm-prompt MermaidDiagram MYTEXT MY_DIAGRAM_TYPE

In some cases it is more convenient to use llm-prompt than prompt expansion. For example:

llm-chat "@CodeWriterX|Raku 2D random walk." | llm-chat -i=ch --prompt="$(llm-prompt CodeHighlighter --format=HTML)"

Underlying and alternative

Access to LLMs is provided by the packages “WWW::OpenAI”, “WWW::Gemini”, “WWW::MistralAI”, “WWW::LLaMA”, and “WWW::Ollama”.

Each of these packages has a corresponding CLI script that is an alternative to llm-chat:

| Package        | CLI                  |
|----------------|----------------------|
| WWW::OpenAI    | openai-playground    |
| WWW::Gemini    | gemini-prompt        |
| WWW::MistralAI | mistralai-playground |
| WWW::LLaMA     | llama-playground     |
| WWW::Ollama    | ollama-client        |

Related alternatives

The package “LLM::DWIM”, [BDp1], is similar in spirit to “Chatnik”, and it is also based on the LLM packages “LLM::Functions”, [AAp1], and “LLM::Prompts”, [AAp2].
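For comparison, here is a minimal “LLM::DWIM” usage sketch, based on its documented dwim function (an API key for the configured provider is assumed):

use LLM::DWIM;

# One-shot "do what I mean" call: no explicit chat object or prompt spec.
say dwim "How many miles is it to the Moon?";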

There are significant differences, however, in that “LLM::DWIM”:

  1. Has its own loop for the user-LLM chat
  2. Does not use prompt expansion
  3. Uses only one chat object
  4. Saves chat history, but does not create new chat objects from it

The Raku package “Jupyter::Chatbook” uses the same evaluation mechanisms as “Chatnik”, but its interactive environment is a Jupyter notebook instead of an OS shell. The Python package “JupyterChatbook” and the Wolfram Language paclet “Chatbook” are also notebook alternatives to “Chatnik”.

Summarizing graph


References

Articles, blog posts

[AA1] Anton Antonov, “Jupyter::Chatbook”, (2023), RakuForPrediction at WordPress.

[AA2] Anton Antonov, “Jupyter::Chatbook Cheatsheet”, (2026), RakuForPrediction at WordPress.

[AA3] Anton Antonov, “Jupyter Chatbook Cheatsheet”, (2026), PythonForPrediction at WordPress.

Packages

[AAp1] Anton Antonov, LLM::Functions, Raku package, (2023-2026), GitHub/antononcube.

[AAp2] Anton Antonov, LLM::Prompts, Raku package, (2023-2025), GitHub/antononcube.

[AAp3] Anton Antonov, Jupyter::Chatbook, Raku package, (2023-2026), GitHub/antononcube.

[AAp4] Anton Antonov, Data::Translators, Raku package, (2023-2026), GitHub/antononcube.

[AAp5] Anton Antonov, JupyterChatbook, Python package, (2023-2026), GitHub/antononcube.

[BDp1] Brian Duggan, LLM::DWIM, Raku package, (2024-2025), GitHub/bduggan.

[CGp1] Connor Gray, et al. Chatbook, Wolfram Language paclet, (2023-2024), Wolfram Language Paclet Repository.

Videos

[AAv1] Anton Antonov, “Integrating Large Language Models with Raku”, (2023), The Raku Conference 2023 at YouTube.


Rakudo compiler, Release #192 (2026.04)

Published on 2026-04-25T00:00:00

2026.16 Selkie TUI Framework

Published by librasteve on 2026-04-21T18:33:32

Post Image: Carolyn Emerick, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons

Matt’s Corner

Matt Doughty has served up a double helping of Selkie this week. This is a TUI (Terminal User Interface) module written in Raku that provides a simple, declarative way to roll your own TUI app. Please do check it out – I look forward to seeing a crop of Raku TUIs to feature in the weekly.

and

The pitch: you build a widget tree declaratively, mutate state, and the framework handles dirty tracking, rendering, focus cycling, resize, and teardown. Native performance without the pain.

Sizing uses a fixed/percent/flex model similar to CSS flexbox. There’s an optional re-frame style reactive store if you want centralized state with event dispatch, effect handlers, and path subscriptions.

Ships with a decent widget set out of the box: text inputs (single and multi-line), scroll views, list views, card lists, tables with sortable columns, tab bars, modals, progress bars, a command palette, file browser, image display via pixel blitter, and theming throughout.

Seven example apps in the repo covering everything from a minimal counter to a tabbed dashboard.

Also includes a testing toolkit (keystroke synthesis, Supply observation, store assertions, snapshot testing against a headless notcurses instance) so you can test widget behaviour without a terminal.

Richard’s Corner

Richard Hainsworth says

I am really pleased to let you know that finally Damian Conway and I have significantly upgraded the specification of RakuDoc V2, so much so that I am calling it informally v2+. Elizabeth Mattijsen has added extra directives into RakuAST to accommodate these upgrades. I have upgraded Rakuast::RakuDoc::Render so that it now handles the whole RakuDoc V2 specification, and bumped the module version to v1.0.0.

There is a docker image (https://docker.io/finanalyst/browser-editor) that will serve localhost html to a browser and can be used to edit and evaluate RakuDoc. There is also a docker image (https://docker.io/finanalyst/rakudoc_browser) that will do the same for a web-based version.

For members of the Raku community, I have the web-based version running at https://raku.finanalyst.org/rakudoc_editor/. This URL can be shared with the Raku community in the weekly, but not more widely.

I encourage anyone who is interested in Rakudoc to test drive the great new enumeration features at the links above. Awesome work by Richard, Damian and Elizabeth.

TPRC Submit your Talk

Don’t Miss the Perl and Raku Conference 2026 in Greenville, SC
SAVE THE DATES! Friday through Sunday, June 26-28

Registration is open: https://tprc.us/tprc-2026-gsp

Weekly Challenge

Weekly Challenge #370 is available for your joy.

Raku Tips ‘n Tricks

This week, Anton Oks has proposed that we focus on some cool Raku from over 200 examples featured on https://rosettacode.org/wiki/Category:Raku. This is his contribution to get the ball rolling (disclosure: he is the author of this entry)…

#| Recursive, parallel, random pivot, single-pass, quicksort implementation
multi quicksort(\a where a.elems < 2) { a }
multi quicksort(\a, \pivot = a.pick) {
    my %prt{Order} is default([]) = a.classify: * cmp pivot;
    my Promise $less = start { samewith(%prt{Less}) }
    my $more = samewith(%prt{More});
    await $less andthen |$less.result, |%prt{Same}, |$more;
}
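A quick, hypothetical smoke test of the routine above (note the space before the parenthesis, so the whole list is passed as one argument):

say quicksort (3, 1, 4, 1, 5, 9, 2, 6);   # expected: (1 1 2 3 4 5 6 9)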

This code shows off a cool set of Raku features:

  1. multi dispatch, with a where constraint handling the short-list base case
  2. sigilless terms (\a, \pivot) and a default argument drawn from the data itself (a.pick)
  3. an object Hash keyed by the Order enum, with is default([]) supplying empty partitions
  4. classify with cmp, partitioning the list in a single pass
  5. start / Promise / await for parallel recursion, together with samewith and andthen
  6. Slips (|) to flatten the three partitions back into one result list

Very nice – thanks to Anton for creating some aspirational[*] Raku.

Your contribution is welcome, please make a gist and share via the #raku channel on IRC or Discord.

Comments About Raku

New Doc & Web Pull Requests

New Raku Modules

Updated Raku Modules

Winding down

[*] Aspirational. Just to clarify, it is not that we should all aspire to that style, but that if this is a style you want – a taut, compact routine that engages all facets of your language, then Raku excels. Other coding styles are available in Raku.

Anton shared these words with me:

… I believe no other language can do that so short and still readable. 😉 Raku is so much fun to work with – I do not understand why it is not much more popular …

Please keep staying safe and healthy, and keep up the good work! Even after week 64 of hopefully only 209.

~librasteve

P.S. Here is why I think Raku is so much fun…

Asking “Why wasn’t Shakespeare a German?” can be reframed as a question about language: why did his genius emerge in English rather than in German? One answer lies in the remarkable flexibility and absorptive quality of English during the late Renaissance. By Shakespeare’s time, English had already drawn heavily from Latin, French, and other languages, creating a hybrid vocabulary that allowed for nuance, invention, and wordplay. Shakespeare could shift easily between high and low registers, coin new expressions, and layer meanings in ways that felt natural within this fluid linguistic system.

German, by contrast—though rich and expressive—was less standardized in the 16th century and often more structurally rigid. Its grammatical complexity and regional fragmentation made it harder to achieve the same kind of rapid, playful experimentation that Shakespeare employed. Where English encouraged improvisation and borrowing, German tended to preserve clearer boundaries within its forms, limiting linguistic elasticity. Shakespeare’s writing thrives on ambiguity, puns, and tonal shifts, and these qualities depended on a language as adaptable as English.

2026.15 Hugs & Busses

Published by librasteve on 2026-04-13T17:59:27

Post image attribution: Eddie Leslie from Lancashire, CC BY-SA 2.0 https://creativecommons.org/licenses/by-sa/2.0, via Wikimedia Commons

Whateverables Corner

This week sees the release of 3 (yes, 3) new Whateverables.

The Whateverables are a collection of IRC bots primarily useful for Raku developers. They are written in Raku and are based on IRC::Client. Many of the use cases involve running Raku code using pre-built versions of the Rakudo compiler for each commit.

They mostly work over the IRC-Discord bridge too (make sure you are in the main #raku channel and note that sometimes the gremlins can get under the bridge – or is that the trolls).

Oh – and have something for the weekly? Then the Notable bot can be fed using weekly: my hot news

TPRC Submit your Talk

Don’t Miss the Perl and Raku Conference 2026 in Greenville, SC
SAVE THE DATES! Friday through Sunday, June 26-28

Registration is open: https://tprc.us/tprc-2026-gsp

Weekly Challenge

Weekly Challenge #369 is available for your giggles.

Raku Tips ‘n Tricks

This week, my eye was caught by an interesting post on Quora by Jan M Savage: Why do we use variables in programming languages?

Variables capture data that we can re-use, but more importantly, judicious use of variables lets our code tell a story.

sub sec-parser($secs) {
    my $rsecs =   $secs mod 60;
    my $rmins =  ($secs div 60) mod 60;
    my $rhrs  = (($secs div 60) div 60) mod 24; 
    my $days  = (($secs div 60) div 60) div 24;
    (:$days, :$rhrs, :$rmins, :$rsecs);
}

say sec-parser(240_000);    # (days => 2 rhrs => 18 rmins => 40 rsecs => 0)

Of course, you’ve made use of variables (declared with my) in order to capture data, but: your code is not telling a story since the variables are poorly named; pray what is rhrs???

Jan will show you how to write the same function such that all the div and mod operations occur underground (so to speak), so that all you can see is what matters:

sub sec-parser($total-sec) {
    my ($secs, $mins, $hrs, $days) = $total-sec.polymod(60, 60, 24);
    (:$days, :$hrs, :$mins, :$secs);
}

say sec-parser(240_000);    # (days => 2 hrs => 18 mins => 40 secs => 0)
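The heavy lifting is done by polymod, which peels off each unit in turn; checking the arithmetic for the number above:

say 240_000.polymod(60, 60, 24);   # (0 40 18 2), i.e. secs, mins, hrs, days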

This code shows off a cool set of Raku features:

  1. polymod, which performs the whole cascade of div and mod operations in one call
  2. destructuring list assignment into several well-named variables at once
  3. colon-pair syntax (:$days) to turn those variables straight into named output

Very nice – thanks to Jan for combining great advice on variables and showing off some Raku.

Your contribution is welcome, please make a gist and share via the #raku channel on IRC or Discord.

Questions About Raku

Comments About Raku

New Doc & Web Pull Requests

New Raku Modules

Updated Raku Modules

Winding down

As we say in London, it’s like you wait forever for a bus and then 3 come along all at once. So it is with the spasm of activity this week on Whateverables (the last updates to that wiki were in 2023 … and 2020). Thanks to the authors for reviving the fun.

Please keep staying safe and healthy, and keep up the good work! Even after week 63 of hopefully only 209.

~librasteve

2026.14 Trim Flip-flops

Published by librasteve on 2026-04-06T18:45:19

Habere’s Corner

Habere-et-Dispertire has shared some new notes on using the raku command:

raku -ne "put .trim" {{/path/to/file}}

raku -ne '.put if "START" ff "STOP"' {{/path/to/file}}

raku -ne '.put if "START" ^ff^ "STOP"' {{/path/to/file}}

TPRC Submit your Talk

Don’t Miss the Perl and Raku Conference 2026 in Greenville, SC
SAVE THE DATES! Friday through Sunday, June 26-28

Registration is open: https://tprc.us/tprc-2026-gsp

Weekly Challenge

Weekly Challenge #368 is available for your entertainment.

Raku Tips ‘n Tricks

This week, my eye was caught by what looked like a simple question on the IRC (Discord) channels:

my (%a, @b, %c, @d) = (1, 2).map({ |({"a" => 1}, [1]) });

Why does this seemingly straightforward list assignment with = (see docs) cause an error?

Odd number of elements found where hash initializer expected:
Found 5 (implicit) elements: ...

Let’s tease that apart: say we make the LHS just a single Array @b, and then a single Hash %a

my @b = (1, 2).map({ |({"a" => 1}, [1]) });
say @b;    # OUTPUT: [{a => 1} [1] {a => 1} [1]]

my %a = (1, 2).map({ |({"a" => 1}, [1]) });
# ERROR

The docs say that list assignment is governed by the LHS – an Array will take all the RHS elements. We can surmise that a Hash will also take all the RHS elements – even splitting keys from values – and translate them into key => value pairs. Thus the error if presented with an odd number of elements in total.
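For contrast, a right-hand side with an even number of elements pairs up cleanly (a small illustrative example):

my %ok = 'a', 1, 'b', 2;
say %ok;   # {a => 1, b => 2}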

Despite this frustration, Raku has two ways to make this work:

my (%a, @b, %c, @d) Z= (1, 2).map({ |({"a" => 1}, [1]) });
# -or-
my (%a, @b, %c, @d) := (1, 2).map({ |({"a" => 1}, [1]) });

say {:%a, :@b, :%c, :@d};
# OUTPUT: {a => {a => 1}, b => [1], c => {a => 1}, d => [1]}

You can use binding := instead: since the LHS and the RHS have the same data structure, this will work. Or you can combine Z (the zip metaoperator) with = (item assignment), like this:

my ($a, $b, $c) Z= @x;
# which turns that assignment into the boring
my $a = @x[0]; my $b = @x[1]; my $c = @x[2];

Your contribution is welcome, please make a gist and share via the #raku channel on IRC or Discord.

Questions About Raku

Comments About Raku

New Doc & Web Pull Requests

New Raku Modules

Updated Raku Modules