Autolang Runtime / VM
A deep dive into the high-performance, deterministic execution environment of Autolang.
Design Philosophy
The Autolang Runtime is designed around three core pillars that distinguish it from heavyweight VMs such as the JVM or the .NET CLR:
Zero Magic
No hidden background threads, no stop-the-world GC, and no JIT warmup. Execution is 100% deterministic.
Embeddability First
The VM is a library, not an operating system. It integrates seamlessly into C++ host applications.
Predictable Performance
Memory usage and execution paths are clear, making it suitable for real-time and embedded constraints.
Execution Model: The "Dumb VM"
We follow the "Smart Compiler, Dumb VM" architecture. The VM knows nothing about source code, syntax sugar, or parsing logic; it purely executes data structures passed down from the compiler. This keeps the VM extremely lightweight (a few hundred KB).
Current: Stack-Based
Instructions pop operands from the stack, perform operations, and push results back. This simplifies compiler implementation and keeps bytecode compact.
Future: Register-Based
To achieve native-like performance on single-core chips, we plan to transition to a Register-Based VM, reducing the total instruction count and making better use of CPU cache locality.
Stack & Frame Management
Because the VM is single-threaded, we can use aggressive stack optimizations that are impossible in multi-threaded VMs without locking.
Pre-calculated Frames: The compiler calculates exactly how much stack space a function needs. The VM allocates the frame in O(1).
Sequential Allocation: Uses a specialized StackAllocator that is strictly sequential, so no locks are required.
Instant Reclamation: When a function returns, the stack pointer is simply moved back. No GC scan required.
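The three properties above amount to a bump-pointer allocator. The sketch below is an assumption of how such a StackAllocator could look (the real class name matches, but its interface here is invented):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Strictly sequential stack allocator. Single-threaded, so no locks:
// a frame is just a bump of the top pointer, and a return rewinds it.
class StackAllocator {
public:
    explicit StackAllocator(size_t bytes) : buffer_(bytes), top_(0) {}

    // O(1) frame allocation: frameSize was pre-computed by the compiler.
    void* pushFrame(size_t frameSize) {
        assert(top_ + frameSize <= buffer_.size() && "stack overflow");
        void* frame = buffer_.data() + top_;
        top_ += frameSize;
        return frame;
    }

    // Instant reclamation: move the pointer back. No GC scan required.
    void popFrame(size_t frameSize) { top_ -= frameSize; }

    size_t used() const { return top_; }

private:
    std::vector<uint8_t> buffer_;
    size_t top_;
};
```

Because frames are contiguous and sizes are known at compile time, a call is two pointer adjustments instead of a heap allocation.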
Dynamic Stack Resizing
If recursion depth exceeds the current stack size, the VM automatically reallocates to a larger memory region. Crucially, it can also shrink back to save RAM on embedded devices when deep recursion ends.
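A grow/shrink policy could look like the following sketch. The doubling and quarter-occupancy thresholds are assumptions, and a real VM would address frames by offset rather than raw pointer, since reallocation moves the buffer:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical resizing policy: double the stack when the next frame
// would overflow, halve it when usage drops below a quarter.
// Returns the new capacity in bytes.
size_t resizeIfNeeded(std::vector<uint8_t>& stack, size_t top, size_t needed) {
    size_t cap = stack.size();
    if (top + needed > cap) {
        while (top + needed > cap) cap *= 2;  // grow for deep recursion
        stack.resize(cap);                    // bytes below `top` are preserved
    } else if (cap > 4096 && top < cap / 4) {
        stack.resize(cap / 2);                // shrink back to save RAM
    }
    return stack.size();
}
```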
Memory Management Strategy
Autolang uses a hybrid approach to memory, balancing speed and safety.
AreaAllocator & ObjectManager
Heap allocation is handled by AreaAllocator, which grabs large contiguous blocks of memory from the OS and sub-allocates them sequentially. The ObjectManager sits on top to manage object lifecycles. Because memory is allocated contiguously upfront, even if individual objects leak, the entire arena is reclaimed at the end of execution, making catastrophic memory leaks structurally impossible.
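A sketch of an arena in the spirit of AreaAllocator (the interface and single-block simplification are assumptions; the real allocator would chain additional blocks on exhaustion):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// One large block grabbed up front; objects are sub-allocated by
// bumping an offset. Destroying the arena reclaims everything at once,
// including objects whose individual lifetimes were mishandled.
class Arena {
public:
    explicit Arena(size_t bytes)
        : base_(static_cast<uint8_t*>(std::malloc(bytes))),
          size_(bytes), offset_(0) {}

    ~Arena() { std::free(base_); }  // whole arena released in one call

    void* allocate(size_t n, size_t align = 8) {
        size_t p = (offset_ + align - 1) & ~(align - 1);  // align up
        if (p + n > size_) return nullptr;  // real impl: grab another block
        offset_ = p + n;
        return base_ + p;
    }

    size_t used() const { return offset_; }

private:
    uint8_t* base_;
    size_t size_;
    size_t offset_;
};
```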
Primitive Caching
To reduce allocation pressure, common immutable objects like small Int and Float values are cached and reused by the ObjectManager.
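The idea can be sketched as a fixed table of pre-built objects for a small value range. The range, the `IntObject` layout, and the class name are all assumptions for illustration:

```cpp
#include <array>
#include <cstdint>

// Hypothetical primitive cache: small Ints in a fixed range map to
// shared, pinned objects instead of fresh heap allocations.
struct IntObject { int64_t value; uint32_t refCount; };

class ObjectManagerSketch {
public:
    ObjectManagerSketch() {
        for (int i = 0; i < kCacheSize; ++i)
            cache_[i] = {i + kMin, 1};       // pinned: never deallocated
    }
    IntObject* makeInt(int64_t v) {
        if (v >= kMin && v < kMin + kCacheSize)
            return &cache_[v - kMin];        // reuse the cached object
        return new IntObject{v, 1};          // fall back to the heap
    }
private:
    static constexpr int kMin = -128;        // cached range is an assumption
    static constexpr int kCacheSize = 384;   // covers -128..255
    std::array<IntObject, kCacheSize> cache_;
};
```

Requesting the same small value twice yields the same object, so hot loops over small integers allocate nothing.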
Deterministic Deallocation
Reference Counting: Objects are reclaimed immediately when their ref-count hits zero.
No GC Pauses: There is no background garbage collector thread. You never get random latency spikes.
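The retain/release discipline behind these two points can be sketched as follows (the function names and the `g_freed` flag are illustrative only):

```cpp
#include <cstdint>

// Deterministic deallocation sketch: the object dies at the exact
// moment its ref-count reaches zero. No collector thread, no pauses.
struct Obj {
    uint32_t refCount = 1;
};

bool g_freed = false;  // test hook standing in for the allocator

void retain(Obj* o) { ++o->refCount; }

void release(Obj* o) {
    if (--o->refCount == 0) {
        g_freed = true;  // memory returns to the allocator here
        delete o;
    }
}
```

Because reclamation happens inline at the final `release`, latency is attributed to the code that caused it, never to a background thread.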
Runtime Type System
At runtime, all heap objects are represented by a unified structure: AObject.
struct AObject {
    uint32_t classId;         // Unique Type ID
    uint32_t refCount;        // Reference counting
    uint32_t flags;           // Flags: IS_CONST, IS_FREE
    union { /* ... */ } data; // 8 bytes
};                            // 20 bytes total (unpadded)
Fast is checks: Type checking is just an integer comparison.
Monomorphization: The compiler and VM work together to generate specialized bytecode for generic types (e.g., INT_PLUS_INT vs FLOAT_PLUS_FLOAT).
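With every heap object carrying a `classId` in a unified header, an `is` check reduces to one integer comparison, and monomorphized opcodes can skip the check entirely. The class IDs below are assumptions:

```cpp
#include <cstdint>

// Header mirroring the first fields of AObject.
struct AObjectHeader { uint32_t classId; uint32_t refCount; uint32_t flags; };

constexpr uint32_t CLASS_INT   = 1;  // IDs assigned by the compiler (assumed)
constexpr uint32_t CLASS_FLOAT = 2;

// Fast `is` check: one integer compare, no vtable walk, no string lookup.
inline bool isInstance(const AObjectHeader* o, uint32_t classId) {
    return o->classId == classId;
}
```

A monomorphized opcode such as `INT_PLUS_INT` can omit even this comparison, because the compiler has already proven both operands are Ints.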
