Technology deep dive
CPE
Conditional Parallel Execution is a new primitive built on top of Solana's Sealevel runtime. This page explains the technical foundations, the CPE proof structure, and how it integrates with existing infrastructure.
01 -- Sealevel Runtime Internals
Parallel Processing on Solana
Solana's Sealevel runtime was the first production blockchain runtime designed to process transactions in parallel. Unlike EVM-based chains, where transactions execute sequentially, Sealevel identifies transactions with non-overlapping account sets and schedules them across multiple CPU cores simultaneously.
The key insight: Solana transactions must declare all accounts they will read from or write to upfront. This allows the runtime to perform dependency analysis before execution.
// Sealevel Account Locking Model
struct Transaction {
    accounts: Vec<AccountMeta>,
    instructions: Vec<Instruction>,
}

// AccountMeta declares the access pattern
struct AccountMeta {
    pubkey: Pubkey,
    is_signer: bool,
    is_writable: bool,
}

// Lock rules:
// - Writable accounts: exclusive lock (one tx at a time)
// - Read-only accounts: shared lock (many txs simultaneously)
// - No overlap in write sets = parallel execution eligible
Thread Scheduling
The Sealevel scheduler groups transactions into batches based on account dependencies. Transactions within the same batch have non-overlapping write sets and can execute on separate threads. Each thread processes its transactions independently.
Account Locking
Before execution, Sealevel acquires locks on all accounts a transaction touches. Write locks are exclusive; read locks are shared. Two transactions with overlapping write accounts must execute sequentially. Non-overlapping transactions execute in parallel.
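The locking rules above condense into a single compatibility check. The sketch below is a minimal illustration; the `TxAccess` type and the two-transaction signature are assumptions for this example, not Sealevel's actual API:

```rust
use std::collections::HashSet;

// Hypothetical stand-ins for this illustration.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Pubkey(u64);

struct TxAccess {
    writes: HashSet<Pubkey>, // accounts declared is_writable
    reads: HashSet<Pubkey>,  // read-only accounts
}

// Two transactions may execute in parallel iff neither writes an
// account the other reads or writes: write locks are exclusive,
// read locks are shared.
fn can_run_in_parallel(a: &TxAccess, b: &TxAccess) -> bool {
    a.writes.is_disjoint(&b.writes)
        && a.writes.is_disjoint(&b.reads)
        && b.writes.is_disjoint(&a.reads)
}
```

Note that shared reads never block: two transactions that read the same account but write disjoint accounts pass the check.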
02 -- Sealevel Thread Model
Multi-threaded Transaction Processing
Sealevel distributes transactions across multiple hardware threads. Each validator node runs N worker threads (typically matching CPU core count). The scheduler performs a topological sort of the transaction dependency graph and assigns independent subgraphs to separate threads.
The thread model is the foundation of Solana's throughput advantage. While Ethereum processes transactions sequentially at roughly 15 TPS, Solana processes thousands of TPS across many threads. CPE leverages this by ensuring your transactions are always eligible for concurrent scheduling.
// Sealevel Thread Model (simplified)
struct SealevelScheduler {
    threads: Vec<WorkerThread>,
    lock_table: HashMap<Pubkey, LockState>,
}

impl SealevelScheduler {
    fn schedule(&self, txs: Vec<Transaction>) -> Vec<ThreadBatch> {
        // 1. Build dependency graph from account declarations
        let graph = build_dependency_graph(&txs);
        // 2. Find connected components (groups of dependent txs)
        let components = graph.connected_components();
        // 3. Distribute independent components across threads
        components.iter().enumerate().map(|(i, comp)| {
            ThreadBatch {
                thread_id: i % self.threads.len(),
                transactions: comp.clone(),
            }
        }).collect()
    }
}

// Key insight: each CPE transaction forms its own connected
// component with zero edges (no dependencies between them), so
// none of them is ever serialized behind another -- all of them
// are eligible to execute concurrently.
Thread Count
Typical validators run 8-16 worker threads. A CPE bundle of 4 transactions can occupy 4 threads simultaneously, leaving the remaining threads free for other transactions in the slot.
Lock Granularity
Locks are per-account, not per-transaction. This means two transactions can share read locks on the same account while maintaining exclusive write locks on different accounts.
Slot Boundaries
All transactions within a CPE bundle execute within the same slot (~400ms). This holds whenever the transactions are included together in the same block.
03 -- Account Lock Contention Analysis
Understanding Contention
Account lock contention is the primary reason transactions fail to execute in parallel. When two transactions compete for a write lock on the same account, the scheduler must serialize them. CPE pre-validates that no such contention exists in your bundle.
// Contention Analysis Algorithm
fn analyze_contention(bundle: &CpeBundle) -> ContentionReport {
    let mut contention_map: HashMap<Pubkey, Vec<usize>> = HashMap::new();
    // Map each writable account to the txs that touch it
    for (i, tx) in bundle.transactions.iter().enumerate() {
        for acc in tx.writable_accounts() {
            contention_map.entry(acc).or_default().push(i);
        }
    }
    // Identify hot accounts (written by more than one tx)
    let hot_accounts: Vec<Pubkey> = contention_map
        .iter()
        .filter(|(_, txs)| txs.len() > 1)
        .map(|(acc, _)| *acc)
        .collect();
    ContentionReport {
        is_parallel_eligible: hot_accounts.is_empty(),
        hot_accounts,
        contention_score: compute_score(&contention_map),
    }
}
Common Contention Sources
-- Token mint authority accounts
-- Shared pool state accounts (AMM reserves)
-- Global config PDAs
-- Fee collection accounts
Resolving Contention
-- Use separate token accounts per transaction
-- Route through different pool instances
-- Split operations across non-overlapping PDAs
-- CPE SDK provides automatic contention resolution hints
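The first strategy shows up directly in the write sets. In the hypothetical example below, two swaps that both write one shared fee account (key 99) fail the disjointness check, while the same swaps routed through per-transaction fee accounts pass it. All keys are illustrative placeholders:

```rust
use std::collections::HashSet;

type Pubkey = u64; // simplified stand-in for a 32-byte key

fn bundle_is_parallel_eligible(write_sets: &[HashSet<Pubkey>]) -> bool {
    for i in 0..write_sets.len() {
        for j in (i + 1)..write_sets.len() {
            if !write_sets[i].is_disjoint(&write_sets[j]) {
                return false; // shared writable account = contention
            }
        }
    }
    true
}

// Both txs write the shared fee account 99: contention.
fn shared_fee_write_sets() -> Vec<HashSet<Pubkey>> {
    vec![HashSet::from([1, 99]), HashSet::from([2, 99])]
}

// Fee collection split across accounts 98 and 99: disjoint.
fn split_fee_write_sets() -> Vec<HashSet<Pubkey>> {
    vec![HashSet::from([1, 98]), HashSet::from([2, 99])]
}
```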
04 -- CPE Validation
Verifying Parallel Execution Eligibility
When a CPE bundle is submitted, MonoGirl performs rigorous validation to ensure all transactions can execute in parallel. This involves analyzing the account sets of every transaction in the bundle and verifying zero write-set overlap.
// CPE Validation Algorithm
fn validate_cpe_bundle(bundle: &CpeBundle) -> Result<CpeProof> {
    let txs = &bundle.transactions;
    // Step 1: Extract write account sets
    let write_sets: Vec<HashSet<Pubkey>> = txs
        .iter()
        .map(|tx| extract_write_accounts(tx))
        .collect();
    // Step 2: Pairwise overlap check
    for i in 0..write_sets.len() {
        for j in (i + 1)..write_sets.len() {
            if !write_sets[i].is_disjoint(&write_sets[j]) {
                return Err(CpeError::OverlappingAccounts);
            }
        }
    }
    // Step 3: Generate parallel execution proof
    Ok(generate_cpe_proof(bundle, &write_sets))
}
Key Property
If validation passes, the Sealevel account locking model guarantees that these transactions never contend for locks when they appear in the same slot -- none of them is ever serialized behind another. This is not probabilistic -- it is a deterministic property of the account locking model.
05 -- CPE Proof Structure
The Proof Object
A CPE proof encapsulates all the information needed to verify that a set of transactions was eligible for and executed in parallel. It contains the account set analysis, the slot assignment, and a cryptographic commitment to the execution state.
// CPE Proof Structure
struct CpeProof {
    // Bundle metadata
    bundle_id: Hash,
    transaction_count: u8,
    slot: u64,
    // Account analysis
    write_sets: Vec<Vec<Pubkey>>,
    read_sets: Vec<Vec<Pubkey>>,
    overlap_matrix: [[bool; MAX_TXS]; MAX_TXS],
    // Execution proof
    parallel_eligible: bool,
    execution_hash: Hash,
    timestamp: i64,
    // $MONO burn record
    mono_burned: u64,
    burn_signature: Signature,
}
Overlap Matrix
An NxN boolean matrix where entry [i][j] indicates whether transactions i and j have overlapping write accounts. For a valid CPE proof, all off-diagonal entries must be false.
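A sketch of how the matrix could be built and checked. This is illustrative: the simplified `Pubkey` alias and the `MAX_TXS` value are assumptions for this example:

```rust
use std::collections::HashSet;

type Pubkey = u64; // simplified stand-in for a 32-byte key
const MAX_TXS: usize = 8;

// Entry [i][j] is true iff txs i and j share a writable account.
fn overlap_matrix(write_sets: &[HashSet<Pubkey>]) -> [[bool; MAX_TXS]; MAX_TXS] {
    let mut m = [[false; MAX_TXS]; MAX_TXS];
    for i in 0..write_sets.len() {
        for j in 0..write_sets.len() {
            if i != j {
                m[i][j] = !write_sets[i].is_disjoint(&write_sets[j]);
            }
        }
    }
    m
}

// A valid CPE proof requires every off-diagonal entry to be false.
fn off_diagonal_all_false(m: &[[bool; MAX_TXS]; MAX_TXS]) -> bool {
    (0..MAX_TXS).all(|i| (0..MAX_TXS).all(|j| i == j || !m[i][j]))
}
```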
Execution Hash
A cryptographic hash combining the bundle ID, slot number, and post-execution state of all transactions. Serves as an immutable record that parallel execution occurred.
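A sketch of how such a commitment could be assembled, mirroring the inputs the verifier recomputes (bundle ID, slot, write sets). This is illustrative only: a real implementation would use a cryptographic hash such as SHA-256, whereas the std `DefaultHasher` below is a non-cryptographic stand-in chosen to keep the example dependency-free:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash the bundle ID, slot, and canonically ordered write sets
// into a single commitment value.
fn compute_execution_hash(bundle_id: &[u8; 32], slot: u64, write_sets: &[Vec<u64>]) -> u64 {
    let mut h = DefaultHasher::new();
    bundle_id.hash(&mut h);
    slot.hash(&mut h);
    for set in write_sets {
        // Sort so the commitment is independent of account order.
        let mut canonical = set.clone();
        canonical.sort_unstable();
        canonical.hash(&mut h);
    }
    h.finish()
}
```

Because every input is canonicalized, a verifier that reconstructs the same write sets from on-chain data arrives at the same value, while any change to the slot or account sets changes the commitment.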
06 -- CPE Proof Verification
Independent Verification
Any party can independently verify a CPE proof without trusting MonoGirl. The verification algorithm reconstructs the account dependency analysis from on-chain data and confirms that the transactions in the proof were indeed eligible for parallel execution in the claimed slot.
// CPE Proof Verification Algorithm
fn verify_cpe_proof(proof: &CpeProof, rpc: &RpcClient) -> Result<bool> {
    // Step 1: Fetch transactions from the claimed slot
    let slot_txs = rpc.get_block(proof.slot)?.transactions;
    // Step 2: Confirm all bundle txs exist in the slot
    let bundle_sigs = proof.transaction_signatures();
    if !bundle_sigs.iter().all(|s| slot_txs.contains(s)) {
        return Ok(false); // Not all txs landed in the same slot
    }
    // Step 3: Re-extract write sets from on-chain data
    let actual_write_sets: Vec<HashSet<Pubkey>> = bundle_sigs.iter()
        .map(|sig| extract_write_accounts_from_chain(rpc, sig))
        .collect();
    // Step 4: Verify zero overlap
    for i in 0..actual_write_sets.len() {
        for j in (i + 1)..actual_write_sets.len() {
            if !actual_write_sets[i].is_disjoint(&actual_write_sets[j]) {
                return Ok(false); // Write overlap detected
            }
        }
    }
    // Step 5: Verify the execution hash commitment
    let expected_hash = compute_execution_hash(
        &proof.bundle_id, proof.slot, &actual_write_sets,
    );
    Ok(expected_hash == proof.execution_hash)
}
Trustless Verification
The verification algorithm uses only on-chain data accessible through any standard Solana RPC. No MonoGirl infrastructure is required. This makes CPE proofs independently auditable by anyone with an RPC endpoint.
07 -- Account Set Analysis Algorithm
Determining Parallelism
The account set analysis is the core algorithm that determines whether a bundle of transactions can execute in parallel. It must handle edge cases including program-derived addresses, cross-program invocations, and dynamic account resolution.
// Account Set Analysis
fn extract_write_accounts(tx: &Transaction) -> HashSet<Pubkey> {
    let mut accounts = HashSet::new();
    for meta in &tx.message.account_keys {
        if meta.is_writable {
            accounts.insert(meta.pubkey);
        }
    }
    // Include CPI-derived accounts
    for ix in &tx.message.instructions {
        let cpi_accounts = resolve_cpi_accounts(ix);
        accounts.extend(cpi_accounts);
    }
    accounts
}

// Complexity: extraction is O(m) per transaction; the pairwise
// overlap check across the bundle is O(n^2 * m), where n = tx count
// and m = avg accounts per tx. For typical CPE bundles (2-8 txs),
// this is sub-millisecond.
Edge Cases
-- Program accounts (executable): always read-only, never block parallelism
-- System program: shared across all txs, always read-only lock
-- PDAs with same seeds: resolved to same pubkey, caught by overlap check
-- CPI targets: recursively resolved and included in write set analysis
08 -- Theoretical Benchmarks
Performance Characteristics
CPE validation is designed to be computationally lightweight. The account set analysis runs in sub-millisecond time for typical bundles, adding negligible overhead to the transaction submission pipeline.
Bundle Size       Validation Time   Proof Size    Latency Savings
2 transactions    ~0.1ms            ~256 bytes    50% (2x faster)
4 transactions    ~0.3ms            ~512 bytes    75% (4x faster)
8 transactions    ~0.8ms            ~1024 bytes   87.5% (8x faster)
Note on Benchmarks
These are theoretical benchmarks based on account set analysis complexity. Actual performance depends on network conditions, validator load, and slot availability. Latency savings represent the best case where all transactions would otherwise execute sequentially.
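The latency-savings column follows from simple arithmetic: n equally sized transactions take n units of wall-clock time back to back but one unit in parallel, a best-case saving of 1 - 1/n. A one-line sketch:

```rust
// Best-case latency saving for an n-transaction bundle whose
// transactions would otherwise execute sequentially: 1 - 1/n.
fn latency_saving_pct(n: u32) -> f64 {
    (1.0 - 1.0 / n as f64) * 100.0
}
```

This reproduces the table: 50% for 2 transactions, 75% for 4, and 87.5% for 8.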
09 -- Integration with Solana Infrastructure
Fitting into the Stack
MonoGirl CPE operates as a layer between the user and the Solana validator. It does not modify the runtime or consensus -- it leverages existing Sealevel properties to provide guarantees that were always theoretically possible but never practically accessible.
// Integration Architecture
Layer 4: Application -- DeFi protocols, traders, searchers
Layer 3: MonoGirl CPE -- Validation, proof generation, $MONO burn
Layer 2: RPC / Relay -- Transaction submission to validators
Layer 1: Sealevel Runtime -- Account locking, thread scheduling
Layer 0: Solana Consensus -- PoH, Tower BFT, slot production
No Runtime Changes
CPE works entirely within existing Sealevel semantics. No validator modifications, no consensus changes, no forks required.
RPC Compatible
CPE bundles are submitted through standard RPC endpoints. Compatible with any Solana RPC provider including Helius, QuickNode, and Triton.
Composable
CPE can be combined with Jito bundles. Use Jito for ordering between groups, and CPE for parallelism within groups.