
Resultity Node: your GPU, powering global AI

Run a lightweight container, receive real inference jobs, and earn rewards.

Every request served by your node helps power agents, apps, and research running on Resultity.
You're not just running code — you're fueling the next generation of decentralized AI infrastructure.

Installation

Resultity Node

Your inference engine in the decentralized cloud

Resultity Node is a lightweight desktop app that connects your machine to the decentralized cloud. It uses your GPU, CPU, memory, and storage to run models locally and communicate with the Resultity network.

Once online, your node downloads models, receives inference jobs, and keeps its state synced. It helps maintain network stability and executes tasks — earning rewards based on actual contribution.

How it works

From request to reward, the network runs every step transparently.

Inference Consumer

Launches the request

Chooses a model, sends a job to the network, and pays the fee for inference.

The node connects your GPU to the Resultity (RTITY) network. It receives jobs from the orchestrator, runs them using local models, and returns results. Models are installed and updated automatically. The node signs each task and keeps its status synced with the system.

RTITY Cloud

Routes and balances

Receives the job, applies commission, and dispatches it to an available node.

Jobs come in via our API. RTITY Cloud routes them to nodes and handles payments. Part of each payment goes to the node operator, and the remainder supports the wider ecosystem. A running node with installed models counts as active support — even without jobs — and contributes to network stability.
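As a sketch of the fee split described above (the commission rate here is an illustrative placeholder, not a published network parameter):

```python
# Illustrative sketch of the payment split: part of each fee goes to the
# node operator, the remainder supports the wider ecosystem.
# COMMISSION is a made-up placeholder rate, not a real network parameter.
COMMISSION = 0.10

def split_payment(fee: float) -> tuple[float, float]:
    """Return (node_operator_share, ecosystem_share) for a job fee."""
    ecosystem = fee * COMMISSION
    operator = fee - ecosystem
    return operator, ecosystem

print(split_payment(100.0))  # (90.0, 10.0)
```

The exact rate and any per-model adjustments would come from the live network; this only shows the shape of the flow.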

RTITY Node

Executes and rewards

Keeps the network running, performs the job on GPU, and earns the reward.

Want to dive deeper?

Learn how the network works, node incentives, and how to get started.

Supported Model Families

Resultity Nodes cover a wide range of transformer workloads — from chat and retrieval to vision, audio, code, and creative generation.


Chat & Retrieval

LLMs for dialogue, Q&A, and assistants.

Models like LLaMA-2 (7B–70B), Mistral, and OpenChat power interactive sessions, summarization, and search.

VRAM: 8 GB+ for 7B–13B, 24 GB+ for larger variants.
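The VRAM figures above follow a simple rule of thumb: bytes per parameter (which depends on quantization) times parameter count, plus headroom for activations and KV cache. The 20% overhead factor below is an assumption for illustration:

```python
def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: model weights at the given precision
    plus ~20% headroom (activations, KV cache). The 1.2 overhead factor
    is an illustrative assumption, not a measured constant."""
    return params_billions * bytes_per_param * overhead

# 7B model in fp16 (~2 bytes/param): ~16.8 GB, so it wants a 24 GB card.
# 7B model quantized to 4-bit (~0.5 bytes/param): ~4.2 GB, fits in 8 GB.
```

This is why a 7B model fits an 8 GB card only when quantized, while full-precision or larger variants push into the 24 GB+ range.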

Voice & Transcription

Speech-to-text and audio agents.

Run Whisper, OpenVoice, and similar models for live captions, transcription, or voice cloning.

VRAM: 4 GB+ (CPU fallback available).

Vision & Multimodal

AI that sees and understands images.

LLaVA, MiniGPT-4, and CogVLM support OCR, captioning, diagram understanding, and multimodal reasoning.

VRAM: 12 GB+ for reliable output.

Image Generation

Creative tools powered by diffusion.

Models like Stable Diffusion XL, Kandinsky, and Playground v2 handle art, prototypes, and batch rendering.

VRAM: 8 GB+ (16 GB+ for high-res).

Embedding & Search

Semantic search with transformer embeddings.

BGE, InstructorXL, and E5 embed text for RAG, clustering, and vector similarity.

VRAM: 4 GB+ for base models, 8–12 GB for scale.
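The retrieval step these models enable reduces to nearest-neighbor search over embedding vectors. A minimal sketch with toy 4-dimensional vectors (real BGE/E5 embeddings have hundreds of dimensions; the vectors and texts here are made up):

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "embeddings" standing in for BGE/E5 output, purely for illustration.
corpus = {
    "gpu pricing":   np.array([0.9, 0.1, 0.0, 0.1]),
    "node install":  np.array([0.1, 0.9, 0.2, 0.0]),
    "model updates": np.array([0.0, 0.2, 0.9, 0.1]),
}
query = np.array([0.2, 0.8, 0.3, 0.0])  # pretend: "how do I set up a node?"

# The document whose embedding is most similar to the query wins.
best = max(corpus, key=lambda k: cosine_sim(query, corpus[k]))
print(best)  # "node install"
```

In a real RAG pipeline the corpus would be embedded by the model and indexed in a vector store; the similarity math stays the same.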

Function Calling & Tools

LLMs with plugin-like capabilities.

OpenChat tool variants, GPT-4-style function calling, and the ChatML format support structured tool use and context memory.

VRAM: 16 GB+ recommended.
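Function calling generally works by handing the model a JSON schema of callable tools; the model replies with a structured call that the client executes. This example uses the widely adopted OpenAI-style tool schema as an illustration; the tool name is hypothetical and the exact format Resultity nodes expose is an assumption:

```python
import json

# OpenAI-style tool definition. "get_node_status" is a hypothetical tool
# invented for this example, not a real Resultity API.
tool = {
    "type": "function",
    "function": {
        "name": "get_node_status",
        "description": "Return live status for a node by its ID.",
        "parameters": {
            "type": "object",
            "properties": {"node_id": {"type": "string"}},
            "required": ["node_id"],
        },
    },
}

# A tool-capable model responds with a call like this; the client runs the
# function and feeds the result back into the conversation as context.
call = {
    "name": "get_node_status",
    "arguments": json.dumps({"node_id": "rtity-42"}),
}
print(call["name"])  # get_node_status
```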

Code Generation

Autonomous coding assistants.

Run StarCoder, Code Llama, or DeepSeek-Coder to power completions, code translation, or real-time copilots.

VRAM: 8–16 GB+ depending on model size.

RAG & Agents

Modular chains with memory and planning.

Frameworks like LangChain, AutoGen, and DSPy build agents that combine local models with retrieval.

VRAM: 8–24 GB+ depending on context size.

Try them all

Install the node and customize your model collection

Node Ideology


Resultity is more than infrastructure. It’s a movement for open computation, where ownership, rewards, and control stay with you — the node operator.

By running a node you own your contribution, decide your terms, and take part in building a decentralized future where compute is shared, rewarded, and governed by the community.

Explore the bigger idea ➪

Vision

Decentralized by Design

Every Resultity node is autonomous — no centralized scheduler dictates its lifecycle. You decide when to start, update, or pause.

Earn Transparently

Contributions are measured. Work is rewarded. From GPU time to storage to bandwidth — every resource earns its share.

Stay Independent

No lock-in. No custodial wallet. Your keys, your machine, your rules. Everything is verifiable on-chain and in logs.

Shape the Future

Be more than a worker. Vote on proposals, suggest new features, and help govern the evolution of Resultity as a true compute cooperative.

Design Principles of RTITY Node

From solo setups to multi-node swarms — built for simplicity, performance, and transparent rewards.

Modular Architecture


Each node includes a binary core, inference container, and isolated model store — ready to go out of the box.

Containerized Runtime


The compute environment is built on Docker, enabling fast deployment, isolation, and easy upgrades.

Self-Managed Models


Models are downloaded, stored, and loaded locally — enabling fast start times and full offline capability.

Node Dashboard

Unified control panel for live stats, model versions, node tracking, and rewards.

The Node Dashboard is a web-based control panel you can access online to manage your entire swarm of deployed nodes, jobs, and models. It synchronizes configurations, monitors inference tasks, and balances network load in real time.

You can view job history, inspect logs, track performance metrics, and adjust traffic allocations from any browser — whether you’re on a single device or overseeing a full GPU cluster. All updates and model changes are applied automatically via Docker without interrupting running tasks.

Farming First

Resultity Node is built for fleet operators and farming enthusiasts. No command lines — just launch, monitor, and scale through a single dashboard.

Models and logic update automatically. Join the testnet and earn RCP across your entire device fleet.

Ready to participate?

Let us know you're ready to deploy at least one node.