
kobolds-io/stdx

Overview

CAUTION: this project has been archived. Please see https://gitlab.com/kobolds-io/stdx. See https://github.com/kobolds-io/kobolds/blob/main/migrating.md for details.

This library adds several generally useful tools that are either not included in the standard library or behave slightly differently. As the zig programming language matures, we should get more and more awesome std library features, but until then...

All data structures, algorithms and utilities included in this library are written from scratch. This minimizes the threat of malicious or unintentional supply chain attacks. It also ensures that all code is controlled in a single place and HOPEFULLY minimizes the chance that zig turns into the hellish monstrosity that is npm and the nodejs ecosystem.

Table of Contents

  1. Overview
    1. Usage
    2. Installation
    3. Organization
    4. Examples
    5. Benchmarks
    6. Contributing
    7. Code of Conduct
  2. Documentation
    1. stdx
      1. Multithreading
        1. BufferedChannel
        2. UnbufferedChannel
        3. Signal
      2. Events
        1. EventEmitter
      3. Queues/Lists
        1. ManagedQueue
        2. UnmanagedQueue
        3. RingBuffer
      4. Memory Management
        1. MemoryPool

Usage

Using stdx is just as simple as using any other zig dependency.

// import the library into your file
const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    // your code
    // ....

    // any allocator works here; page_allocator keeps the example self-contained
    const allocator = std.heap.page_allocator;

    var memory_pool = try stdx.MemoryPool(i32).init(allocator, 200);
    defer memory_pool.deinit();

    // your code
    // ...
}

Installation

Install using zig fetch

zig fetch --save https://github.com/kobolds-io/stdx/archive/refs/tags/v0.2.0.tar.gz

Alternatively, you can install stdx just like any other zig dependency by editing your build.zig.zon file.

    .dependencies = .{
        .stdx = .{
            .url = "https://github.com/kobolds-io/stdx/archive/refs/tags/v0.2.0.tar.gz",
            .hash = "",
        },
    },

Run zig build --fetch to fetch the dependencies. This will return an error because the hash will not match. Copy the new hash into your build.zig.zon and try again. Sometimes zig caches things for you in the zig-cache directory; if you run into issues, try deleting that directory.

In the build.zig file add the library as a dependency.

// ...boilerplate

const stdx_dep = b.dependency("stdx", .{
    .target = target,
    .optimize = optimize,
});
const stdx_mod = stdx_dep.module("stdx");

exe.root_module.addImport("stdx", stdx_mod);

Organization

This library follows the organization of the zig std library. You will see familiar hierarchies like stdx.mem for memory-related utilities and stdx.<DATA_STRUCTURE> for other data structures. As I build this library out, I'll add more notes and documentation.

Examples

This library includes examples that give a brief overview of how each feature can be used. Examples live in the examples directory, and more are always welcome. You can build and run them with the following steps.

zig build examples

./zig-out/bin/<example_name>

Examples are best used by modifying the code and adding print statements to figure out what is going on. The tests included in the source files are another good reference for how each feature works.

Benchmarks

This library includes benchmarks that you can run on your local or target hardware. Benchmarks live in the benchmarks directory, and more are always welcome. You can run them with the following steps. Benchmarks in this library are written using zbench by hendriknielander; please check out that repo, star it, and support other zig developers.

Note: benchmarks are always a point of contention. One of my goals is to provision cloud hardware that is used consistently for all comparisons. Until then, you can run the code locally to measure performance yourself. The example output below was produced inside a virtual machine with a fully emulated CPU, so you will likely see better numbers on native hardware.

# with the default debug build
zig build bench

# or with more optimizations
zig build bench -Doptimize=ReleaseSafe

Example output

--------------------------------------------------------
  Operating System: linux x86_64
  CPU:              13th Gen Intel(R) Core(TM) i9-13900K
  CPU Cores:        24
  Total Memory:     23.299GiB
--------------------------------------------------------

|----------------------------|
| BufferedChannel Benchmarks |
|----------------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
send 10000 items       65535    6.652s         101.51us ± 16.49us    (93.918us ... 2.684ms)       99.665us   129.869us  142.732us
receive 10000 items    65535    5.112s         78.012us ± 10.941us   (76.327us ... 1.593ms)       76.486us   102.904us  115.491us

|-------------------------|
| EventEmitter Benchmarks |
|-------------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
emit 1 listeners 10000 65535    2.108s         32.175us ± 4.119us    (31.125us ... 257.44us)      31.434us   44.197us   53us
emit 10 listeners 1000 65535    7.354s         112.226us ± 15.817us  (105.4us ... 1.803ms)        110.99us   155.984us  177.795us
emit 100 listeners 100 65535    1m4.794s       988.707us ± 47.668us  (959.537us ... 2.9ms)        995.677us  1.129ms    1.192ms

|-----------------------|
| MemoryPool Benchmarks |
|-----------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
create 10000 items     65535    11.931s        182.062us ± 16.844us  (172.281us ... 1.015ms)      183.255us  239.032us  267.836us
unsafeCreate 10000 ite 65535    9.944s         151.747us ± 46.353us  (143.466us ... 9.661ms)      150.883us  205.453us  232.292us

|-----------------------|
| RingBuffer Benchmarks |
|-----------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
prepend 10000 items    65535    2.163s         33.019us ± 16.474us   (31.864us ... 4.042ms)       32.177us   47.545us   61.212us
enqueue 10000 items    65535    2.014s         30.735us ± 8.628us    (29.787us ... 1.702ms)       30.022us   44.017us   56.32us
enqueueMany 10000 item 65535    2.055s         31.359us ± 8.313us    (29.842us ... 1.718ms)       30.663us   43.289us   54.346us
dequeue 10000 items    65535    2.07s          31.589us ± 6.167us    (30.901us ... 693.698us)     30.915us   43.223us   53.881us
dequeueMany 10000 item 65535    2.067s         31.547us ± 6.078us    (30.234us ... 942.003us)     30.842us   43.305us   52.64us
concatenate 10000 item 65535    2.127s         32.466us ± 7.377us    (30.978us ... 660.708us)     31.698us   47.277us   60.76us
copy 10000 items       65535    2.112s         32.235us ± 6.671us    (30.242us ... 600.542us)     31.537us   51.058us   63.568us
sort 10000 items       65535    28.923s        441.345us ± 118.3us   (417.625us ... 21.623ms)     444.024us  558.995us  601.388us

|-------------------|
| Signal Benchmarks |
|-------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
send/receive 10000 ite 65535    10.286s        156.967us ± 41.073us  (152.652us ... 7.118ms)      155.585us  200.04us   223.427us

|------------------------------|
| UnbufferedChannel Benchmarks |
|------------------------------|
benchmark              runs     total time     time/run (avg ± σ)    (min ... max)                p75        p99        p995
-----------------------------------------------------------------------------------------------------------------------------
send/receive 10000 ite 65535    18.64s         284.442us ± 89.213us  (274.464us ... 16.443ms)     286.887us  355.093us  391.339us

Contributing

Please see Contributing for more information on how to get involved.

Code of Conduct

Please see the Code of Conduct file. Simple library, simple rules.


Documentation

stdx

The stdx top-level module. It directly contains data structures and is the parent of submodules like io and net.

Multithreading

BufferedChannel

added v0.0.3 as stdx.BufferedChannel

The BufferedChannel is a structure that can be used to safely transmit data across threads. It uses a backing buffer that stores the transmitted values. It has a very simple send/receive API and supports concepts like cancellation and timeouts.

See example and source for more information on usage.
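
A rough sketch of the intended flow follows; the init/send/receive names and signatures are assumptions, so check the example and tests for the real API.

const std = @import("std");
const stdx = @import("stdx");

fn producer(channel: *stdx.BufferedChannel(u32)) !void {
    // assumed API: send blocks until the backing buffer has room
    try channel.send(42);
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // assumed signature: init(allocator, capacity of the backing buffer)
    var channel = try stdx.BufferedChannel(u32).init(allocator, 16);
    defer channel.deinit();

    const thread = try std.Thread.spawn(.{}, producer, .{&channel});
    defer thread.join();

    // assumed API: receive blocks until a value is available
    const value = try channel.receive();
    std.debug.print("received {d}\n", .{value});
}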

UnbufferedChannel

added v0.0.3 as stdx.UnbufferedChannel

The UnbufferedChannel is a structure that can be used to safely transmit data across threads. It uses a Condition to notify receivers that there is new data. It has a very simple send/receive API and supports timeouts, but does not currently support cancellation.

See example and source for more information on usage.
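
A rough sketch under the same caveat; in particular, the nanosecond timeout parameter below is an assumption about how the timeout support is exposed.

const std = @import("std");
const stdx = @import("stdx");

fn notify(channel: *stdx.UnbufferedChannel(u32)) void {
    // assumed API: send hands the value directly to a waiting receiver
    channel.send(7);
}

pub fn main() !void {
    // assumed API: no backing buffer, so no allocator is needed
    var channel = stdx.UnbufferedChannel(u32){};

    const thread = try std.Thread.spawn(.{}, notify, .{&channel});
    defer thread.join();

    // assumed API: receive with a timeout expressed in nanoseconds
    const value = try channel.receive(1 * std.time.ns_per_s);
    std.debug.print("received {d}\n", .{value});
}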

Signal

added v0.0.8 as stdx.Signal

The Signal is a structure that can be used to safely transmit data across threads. Unlike a channel, it does not require that both threads synchronize at the same point. Think of a Signal as a way for a sender to throw a value over the fence and for a receiver to pick the value up at a later time (when it is convenient for the receiver). Signals are "one shots", meaning that they should only ever be used once. These structures are ideal for request/reply style problems.

See example and source for more information on usage.
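
A rough sketch of the one-shot pattern; the send/receive names below are assumptions, so check the example for the real API.

const std = @import("std");
const stdx = @import("stdx");

fn worker(signal: *stdx.Signal(u32)) void {
    // the sender throws the value over the fence exactly once
    signal.send(123);
}

pub fn main() !void {
    // assumed API: a one-shot signal carrying a u32 result
    var signal = stdx.Signal(u32){};

    const thread = try std.Thread.spawn(.{}, worker, .{&signal});
    defer thread.join();

    // the receiver picks the value up when it is convenient
    const reply = signal.receive();
    std.debug.print("reply: {d}\n", .{reply});
}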

Events

EventEmitter

added v0.0.6 as stdx.EventEmitter

The EventEmitter is a tool for managing communication across callbacks. It is a very similar implementation to the nodejs EventEmitter class, which is one of the fundamental building blocks for asynchronous events. The EventEmitter provides a simple(ish) API to register Callbacks with appropriate Contexts to be invoked when a specific Event is emitted.

See example and source for more information on usage.
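
A rough sketch of registering a callback and emitting an event; the on/emit names and the callback shape below are assumptions.

const std = @import("std");
const stdx = @import("stdx");

const Counter = struct { count: u32 = 0 };

// assumed callback shape: (context, event payload)
fn onTick(counter: *Counter, value: u32) void {
    counter.count += value;
}

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // assumed API: an emitter whose events carry a u32 payload
    var emitter = try stdx.EventEmitter(u32).init(allocator);
    defer emitter.deinit();

    var counter = Counter{};

    // assumed API: register a Callback together with its Context
    try emitter.on("tick", &counter, onTick);

    // assumed API: emit invokes every listener registered for "tick"
    try emitter.emit("tick", 5);

    std.debug.print("count = {d}\n", .{counter.count});
}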

Queues/Lists

ManagedQueue

added v0.0.2 as stdx.ManagedQueue

The ManagedQueue is a generic queue implementation that uses a singly linked list. It provides operations such as enqueueing, dequeueing, checking whether the queue is empty, and concatenating two queues, and it handles the allocation and deallocation of the memory used by the queue. The queue is backed by an allocator, which is used to create and destroy nodes.

See example and source for more information on usage.
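
A rough sketch of basic enqueue/dequeue usage; the exact method names and signatures below are assumptions.

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // assumed API: the queue allocates and frees its own nodes
    var queue = stdx.ManagedQueue(i32).init(allocator);
    defer queue.deinit();

    try queue.enqueue(1);
    try queue.enqueue(2);

    // assumed API: dequeue returns null once the queue is empty
    while (queue.dequeue()) |item| {
        std.debug.print("{d}\n", .{item});
    }
}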

UnmanagedQueue

added v0.0.2 as stdx.UnmanagedQueue

The UnmanagedQueue is a generic queue implementation that uses a singly linked list. It most closely resembles std.SinglyLinkedList in its functionality. Unlike the ManagedQueue, the UnmanagedQueue requires memory allocation to happen externally and provides a generic Node structure to help link everything together.

Please also see UnmanagedQueueNode which is the Node used by the UnmanagedQueue.

See example and source for more information on usage.
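
A rough sketch with caller-owned nodes; the field and method names below are assumptions.

const std = @import("std");
const stdx = @import("stdx");

pub fn main() void {
    // assumed API: the caller owns node memory; stack nodes are fine here
    const Node = stdx.UnmanagedQueueNode(i32);
    var first = Node{ .data = 1 };
    var second = Node{ .data = 2 };

    var queue = stdx.UnmanagedQueue(i32){};
    queue.enqueue(&first);
    queue.enqueue(&second);

    // assumed API: dequeue returns the next node, or null when empty
    while (queue.dequeue()) |node| {
        std.debug.print("{d}\n", .{node.data});
    }
}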

RingBuffer

added v0.0.1 as stdx.RingBuffer

A RingBuffer is a data structure for managing items within a fixed memory allocation. This implementation is particularly useful as a fixed-size queue. Kobolds uses the RingBuffer data structure for inboxes and outboxes when messages are received/sent over TCP connections.

See example and source for more information on usage.
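
A rough sketch of queue-style usage; enqueue/dequeue are borrowed from the benchmark output above, while the init signature is an assumption.

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // assumed API: a fixed-capacity buffer allocated once up front
    var ring = try stdx.RingBuffer(u8).init(allocator, 4);
    defer ring.deinit();

    try ring.enqueue('a');
    try ring.enqueue('b');

    // assumed API: dequeue returns null once the buffer is empty
    while (ring.dequeue()) |byte| {
        std.debug.print("{c}\n", .{byte});
    }
}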

Memory Management

MemoryPool

added v0.0.1 as stdx.MemoryPool

A MemoryPool is a structure that uses pre-allocated blocks of memory to quickly allocate and deallocate resources. It is very useful in situations where you have statically allocated memory but fluctuating usage of that memory. A good example is handling messages flowing through a system.

See example and source for more information on usage.
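
A rough sketch of acquiring and releasing an item; create/destroy are borrowed from the benchmark output above, while the rest is an assumption.

const std = @import("std");
const stdx = @import("stdx");

pub fn main() !void {
    const allocator = std.heap.page_allocator;

    // 200 pre-allocated i32 slots, as in the Usage example above
    var pool = try stdx.MemoryPool(i32).init(allocator, 200);
    defer pool.deinit();

    // assumed API: create hands out a pointer into the pre-allocated memory
    const item = try pool.create();
    item.* = 42;
    std.debug.print("{d}\n", .{item.*});

    // assumed API: destroy returns the slot to the pool for reuse
    pool.destroy(item);
}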
