The Simplest Message Bus (in python)

I came up with this a while ago when I was playing with cooperative LLM agents. It is the simplest implementation of a message bus that I could think of. The architecture is very simple. Let's start with the following barebones Python:

    def main():
        print("Hello, world.")

    if __name__ == "__main__":
        main()

Let's begin by looking at how we interact with our bus. In the main() function, add the following:

    bus = MessageBus()

    def callback(message: str):
        print(f"Received message '{message}'")

Now we need to sort out the bus.
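To give a feel for where this is headed, here's a minimal sketch of the bus itself: a dict mapping topics to lists of subscriber callbacks. The `subscribe`/`publish` method names here are my placeholders, not a fixed API.

```python
class MessageBus:
    """Minimal sketch: topics map to lists of subscriber callbacks."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # Register a callback to be invoked for every message on `topic`.
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback subscribed to `topic`.
        for callback in self.subscribers.get(topic, []):
            callback(message)
```

With this sketch, main() becomes `bus.subscribe("chat", callback)` followed by `bus.publish("chat", "Hello, world.")`.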

The Simplest Message Bus 2, Agents

So we have the simplest message bus from here, but now I'd like to actually do something interesting with it. I'd like to be able to send information from one process to another. We'll call a process that can produce and consume messages from the bus an "Agent". I think people also call these "Actors"? Maybe I should just call them actors, too. :D Let's start with the main function.
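Before diving into the main function, here's a rough sketch of what an Agent could look like on top of the topic-based bus from part one. The names (`Agent`, `send`, `inbox`, one topic per agent) are my own assumptions for illustration.

```python
class MessageBus:
    """The minimal topic/callback bus from part one, sketched inline."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)


class Agent:
    """A process-like object that can both produce and consume messages."""

    def __init__(self, name, bus):
        self.name = name
        self.bus = bus
        self.inbox = []
        # Each agent consumes from a topic named after itself.
        bus.subscribe(name, self.inbox.append)

    def send(self, recipient, message):
        # Produce a message addressed to another agent's topic.
        self.bus.publish(recipient, message)
```

Two agents on the same bus can then talk: `alice.send("bob", "ping")` lands in `bob.inbox`.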

The Simplest Message Bus, Part 3 - Async Stuff

Last time we set up simple synchronous messaging with an Actor object. This time we want to have that Actor running in another thread. I've had a brain drizzle. In ASP.NET apps there is the concept of middleware. Basically middleware sits between an HTTP (or other) request and the endpoint that you've implemented. So we could implement "middleware" for our messaging bus by adding in "filter" Actors where the bus deliberately routes messages through the filters before the messages are sent on to their destination.
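To sketch that filter idea in isolation (synchronously, without the threading; the class and method names here are placeholders of mine): every published message passes through the filter chain before reaching subscribers, and any filter can transform the message or drop it.

```python
class FilteringBus:
    """Sketch of middleware-style filters for the bus: every published
    message runs through the filter chain before delivery."""

    def __init__(self):
        self.subscribers = {}
        self.filters = []

    def add_filter(self, fn):
        # A filter maps a message to a (possibly modified) message,
        # or returns None to drop it entirely.
        self.filters.append(fn)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for fn in self.filters:
            message = fn(message)
            if message is None:  # a filter swallowed the message
                return
        for callback in self.subscribers.get(topic, []):
            callback(message)
```

A logging filter, a profanity filter, or a rate limiter would all slot in via `add_filter` without the sender or receiver knowing.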

Using Svelte and ASP.NET Together

I heard from the kind folks over on the C# discord that using Razor or Blazor for web-facing applications is a bad idea. Apparently neither is really ready for the task; something to do with performance issues on both counts? I had been learning Razor because I want to build a web application – just learning, you know? – and it's a pretty sweet templating language that lets you inline C# straight into the web page.

Shaving the yak, a grammar parser-generator for modified EBNF, Part 1

I decided to shave a yak. I've been working my way through Bob Nystrom's excellent Crafting Interpreters and I got as far as chapter 7, in which you write a parser for your grammar. In the chapter, Bob discusses the grammar that we are to implement in the book. I decided, on a lark, that I would write a code generator that takes (something similar to) Bob's grammar spec and spits out serviceable C# code that can parse the grammar.
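To make the idea concrete, here's a toy sketch of the very first step such a generator needs: splitting one rule of an EBNF-ish spec into its name and alternatives. I'm using `->` as the arrow token and Python purely for illustration; it also naively splits on `|`, so it would mishandle a `|` inside a quoted terminal.

```python
def parse_rule(line):
    """Split a grammar rule like 'expression -> literal | unary'
    into its name and a list of alternatives (each a list of symbols)."""
    name, arrow, body = line.partition("->")
    if not arrow:
        raise ValueError(f"not a rule: {line!r}")
    # Naive split: assumes no '|' appears inside a quoted terminal.
    alternatives = [alt.split() for alt in body.split("|")]
    return name.strip(), alternatives
```

From a list of such (name, alternatives) pairs, the generator can then emit one parse method per rule.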

Setting up Nginx Proxy Manager, Autorun Using SystemD, Run as User

I need a reverse proxy and, to be honest, I'd like it to be as simple as possible. NPM [0] looks really simple. I'd also like to not have to worry about restarting containers when my server goes down.

docker-compose.yml [0]

We begin with the following docker-compose.yml:

    version: '3.8'
    services:
      npm:
        image: 'jc21/nginx-proxy-manager:latest'
        restart: unless-stopped
        ports:
          - '80:80'
          - '81:81'
          - '443:443'
        volumes:
          - ./data:/data
          - ./letsencrypt:/etc/letsencrypt

Running services as a user

Let's not use root to run the services.

Setting Up Owncloud Infinite Scale Using Docker

There are instructions for setting up a tech demo at [0], but these instructions result in a complaint about jwt_something not being set up. A redditor pointed me towards [1], which seems to be a more complete document.

Demo Setup

Grab the ocis image.

    docker pull owncloud/ocis

Create some necessary folders. We'll use the first to store the server config information. We'll use the second to persist the data the server produces.

Home Server SSH Setup

Set up SSH on a headless server on your LAN.

Installation

    pacman -S openssh

Setup

Add user- and network-specific rules to the sshd config at /etc/ssh/sshd_config. Also search for user-specific auth files.

    AllowUsers web

    Match 192.168.0.*
        AllowUsers web root
        PermitRootLogin yes

    Match all
    ...
    AuthorizedKeysFile .ssh/%u_authorized_keys
    ...

And enable the sshd service.

    systemctl enable sshd.service && systemctl start sshd.service

We also need a public/private key pair to SSH in with.

Working with VMs and KVM on Arch Linux

I found that I needed a fresh Windows install to test some documentation against. I've not managed VMs on Linux in a long time. The last time I did this I used VirtualBox and QEMU.

Dependencies

I'll be using QEMU and `virt-manager` [0]. The documentation says to install the following packages: virt-manager, qemu-desktop, dnsmasq and iptables-nft. I found later on that I also needed some utilities for managing VMs (I ran out of space and needed to expand one of the virtual drives).

Running an LLM Locally Using Llama.cpp and a HuggingFace Model

So you've heard about this AI language model thing and you're curious about how to run an AI model on your computer. You've never done this before and it sounds at least interesting! Cool! I've never done this before, either!

Get an inference engine, llama.cpp

First, let's grab the llama.cpp GitHub repository [0].

    git clone git@github.com:ggerganov/llama.cpp.git

And build the repo using make.

    cd llama.cpp && make

This will take a few minutes.