NuraScript
Token-Efficient Language for LLMs

A minimal, high-performance language designed specifically for AI code generation. Achieve 78.1% token reduction while maintaining full expressiveness.

Why NuraScript?

🎯

Token Optimized

Every operation is designed to minimize token count. Single-character operators and prefix notation eliminate unnecessary tokens.

⚡

Lightning Fast

4.58x more token-efficient than Python. Process more code in the same context window, reducing costs and improving speed.

🧠

LLM-Friendly

Designed from the ground up for AI code generation. Predictable syntax, explicit typing, and minimal ambiguity.

🔧

Minimal Keywords

Only 5 keywords: fn, let, call, if, loop. Simple to learn, powerful to use.

📐

Prefix Notation

S-expressions eliminate operator precedence ambiguity. Every operation is clear and unambiguous.

🎨

Type Safety

Explicit type prefixes (i, f, s, b) ensure deterministic typing without verbose syntax.

Code Examples

See how NuraScript compares to Python

Simple Addition

Python
x = 5
y = 10
result = x + y
NuraScript
(let ix 5)
(let iy 10)
(let iresult (+ ix iy))

Function Definition

Python
def add(a, b):
    return a + b
NuraScript
(fn add (ia ib) (+ ia ib))

Conditional Logic

Python
if x > 0:
    result = x * 2
else:
    result = 0
NuraScript
(if (> ix 0) 
  (let iresult (* ix 2)) 
  (let iresult 0))

Loops

Python
for i in range(0, 10):
    print(i)
NuraScript
(loop ii 0 10 (call print ii))

Fibonacci Sequence

Python
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
NuraScript
(fn fibonacci (in) 
  (if (< in 2) 
    in
    (+ (call fibonacci (- in 1)) 
       (call fibonacci (- in 2)))))

Beyond Code: Language Support

NuraScript's token-efficiency principles extend to English and other languages

The same design principles that make NuraScript 78% more token-efficient for code can be applied to English documentation, comments, and prompts. This enables token-efficient communication with LLMs while maintaining semantic clarity.

Core Principles Applied to Language

📝

Minimal Keywords

Replace verbose phrases with compact tokens: "Make sure to" → [req] (3 tokens → 1).

🔤

Symbol Substitution

Use symbols for common concepts: "for each" → @, "error" → !

📋

Structured Format

Organize instructions in compact, unambiguous notation using brackets and prefixes.

🌐

Multi-Language

These principles can be adapted to any language, not just English, for global token efficiency.

Example: Token-Optimized English

Standard English (~30 tokens):
Please write a function that takes two numbers and returns their sum. Make sure to handle edge cases and include error handling.

Token-Optimized (~8 tokens):
[task:fn] [in:2nums] [out:sum] [req:edge+err]

73% reduction

Standard English (~40 tokens):
Create a Python class called UserManager that has methods for adding users, removing users, and listing all users. The class should use a dictionary to store user data.

Token-Optimized (~12 tokens):
[class:UserManager]
  [methods:add_user,remove_user,list_users]
  [store:dict]

70% reduction

Benefits for Documentation & Comments

💬

Compact Comments

Write token-efficient inline documentation that LLMs understand while using minimal tokens.

📚

Efficient Docs

Documentation that fits more information in the same context window, reducing costs.

🤖

Optimized Prompts

Preprocess prompts and instructions to reduce token count by 70%+ while maintaining clarity.

🌍

Universal Application

These principles work for any language - adapt the abbreviation dictionary to your language.
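
As a rough illustration of how such an abbreviation dictionary could work in practice, here is a minimal, unofficial Python sketch. The phrases and tokens below are illustrative assumptions, not an official NuraScript mapping:

```python
import re

# Hypothetical abbreviation dictionary: verbose phrase -> compact token.
# Adapt the entries to your own prompts and language.
ABBREVIATIONS = {
    "make sure to": "[req]",
    "for each": "@",
    "error handling": "[err]",
    "write a function that": "[task:fn]",
}

def compress(text: str) -> str:
    """Replace verbose phrases with compact tokens (longest match first)."""
    for phrase in sorted(ABBREVIATIONS, key=len, reverse=True):
        text = re.sub(re.escape(phrase), ABBREVIATIONS[phrase],
                      text, flags=re.IGNORECASE)
    return text

def expand(text: str) -> str:
    """Reverse mapping: expand compact tokens back for human readers."""
    for phrase, token in ABBREVIATIONS.items():
        text = text.replace(token, phrase)
    return text

print(compress("Make sure to add error handling for each input."))
```

The mapping is bidirectional, as described above: `compress` shrinks prompts for the LLM, and `expand` restores readable phrasing for humans (note that case information is lost in the round trip).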

70-75% Token Reduction: for structured prompts and documentation
Any Language Supported: adaptable to English, Spanish, French, and more
Bidirectional Translation: compress for LLMs, expand for humans

Real-World Case Study

FrendlyApp Server API - Production Codebase Analysis

We analyzed a production server codebase with 45 Python files totaling 32,201 tokens. Here's what happened when we converted to NuraScript:

32,201
Python Tokens
7,034
NuraScript Tokens
78.1%
Token Reduction
4.58x
More Efficient
25,167
Tokens Saved (total)

📊 Scale Impact

  • 45 Python files analyzed across API, models, services, and schemas
  • 4.58x more efficient - consistent with smaller examples
  • Largest file: messages.py - 3,495 → 765 tokens (saves 2,730)

💰 Cost Savings

If this codebase is used in LLM prompts 1,000 times:

  • Input token savings: $755.01
  • Output token savings: $1,510.02
  • Total savings: $2,265.03

🚀 Context Window Benefits

  • Python code: Doesn't fit in GPT-4 context window (8,192 tokens)
  • NuraScript code: Fits completely with room to spare
  • Enables including entire codebase in single prompts
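
The case-study figures are internally consistent and can be checked with a few lines of arithmetic. The per-token prices below are inferred from the stated dollar savings (roughly GPT-4-era pricing of $0.03 per 1K input tokens and $0.06 per 1K output tokens) and are an assumption:

```python
python_tokens = 32_201
nura_tokens = 7_034
uses = 1_000  # number of times the codebase appears in prompts

saved_per_use = python_tokens - nura_tokens              # tokens saved per prompt
efficiency = python_tokens / nura_tokens                 # ~4.58x
reduction_pct = (1 - nura_tokens / python_tokens) * 100  # ~78%

# Assumed pricing: $0.03 / 1K input tokens, $0.06 / 1K output tokens
input_savings = saved_per_use * uses * 0.03 / 1000
output_savings = saved_per_use * uses * 0.06 / 1000

print(f"{saved_per_use:,} tokens saved per use, "
      f"{efficiency:.2f}x, {reduction_pct:.0f}% reduction")
print(f"${input_savings:,.2f} input + ${output_savings:,.2f} output "
      f"= ${input_savings + output_savings:,.2f} total")
```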

Getting Started

Language Syntax

Type Prefixes: i (int), f (float), s (string), b (bool)
Keywords: fn, let, call, if, loop
Operators: +, -, *, /, =, !, <, >
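
Because the official parser and executor are not yet released, here is a minimal, unofficial Python sketch of how an s-expression evaluator for a subset of this syntax could look. It covers `let`, `if`, and the binary operators; `fn`, `call`, and `loop` are omitted for brevity, and the real runtime's semantics may differ:

```python
import operator

# Binary operators from the syntax table above
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": operator.truediv, "<": operator.lt, ">": operator.gt,
       "=": operator.eq}

def tokenize(src):
    """Split source into parens and atoms."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested list from the token stream (one expression)."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop ")"
        return expr
    try:
        return int(tok)  # integer literal
    except ValueError:
        return tok       # identifier (e.g. ix, iresult)

def evaluate(expr, env):
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env[expr]          # variable lookup
    head = expr[0]
    if head == "let":             # (let iname value)
        env[expr[1]] = evaluate(expr[2], env)
        return env[expr[1]]
    if head == "if":              # (if cond then else)
        return evaluate(expr[2], env) if evaluate(expr[1], env) \
            else evaluate(expr[3], env)
    if head in OPS:               # (+ a b), (> a b), ...
        return OPS[head](evaluate(expr[1], env), evaluate(expr[2], env))
    raise ValueError(f"unknown form: {head}")

env = {}
evaluate(parse(tokenize("(let ix 5)")), env)
evaluate(parse(tokenize("(let iy 10)")), env)
print(evaluate(parse(tokenize("(let iresult (+ ix iy))")), env))  # 15
```

Prefix notation keeps the evaluator this small: every compound expression is a list whose head names the operation, so no precedence rules are needed.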

Current Status

⚡ ALPHA - NuraScript is currently in active development. The language specification is stable, and we're continuously improving the transpiler and runtime.

NuraScript will be open source, making token-efficient AI code generation accessible to everyone.

Features include:

  • ✓ NuraScript parser and executor
  • ✓ Built-in runtime functions
  • ✓ Token counting utilities
  • ✓ Support for English documentation and comments

About Me

NuraScript is created by Nathan Sloan, a technologist passionate about AI and LLMs and about guiding their impact on technology and society.

This project explores how token-efficient language design can dramatically reduce costs and improve performance when working with large language models, while ensuring responsible and thoughtful AI development.