Pine Script® indicator
Marcador de Sesion PY (CME Liquidity)
On the 1-hour chart, the three morning marks based on CME liquidity, and the trades taken at those times (10:20, 10:50, and 11:05), are grouped visually as follows:
The 10:20 and 10:50 marks appear over the 10:00 candle.
The 11:05 mark appears over the 11:00 candle.
The script automatically detects the exact minutes and separates the lines on other timeframes.
CME-based adjustment: the 10:20 (PY) mark is specifically labeled "Liquidez CME", which is very useful for assets such as the DXY or gold/currency futures.
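A minimal sketch of the time-matching idea in Pine, assuming the America/Asuncion timezone and simplified midnight handling; the published script's grouping and styling are its own:
//@version=6
indicator("Session Mark Sketch", overlay = true)
// Label the candle whose time range contains 10:20 Paraguay time, on any timeframe.
targetTime = timestamp("America/Asuncion", year, month, dayofmonth, 10, 20)
if time <= targetTime and targetTime < time_close
    label.new(bar_index, high, "Liquidez CME", style = label.style_label_down, color = color.blue, textcolor = color.white)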
Pine Script® indicator
EMA Compression PRO ELITE v6.2
For my VIP PRO students.
This script tries to identify swings or trend direction; it needs to be used together with price action and Bookmap.
Pine Script® indicator
Fixed-price range nested channels
How to Use
Markets are fractal in nature—price does not move in a straight line, but in *waves within waves*. Each move contains smaller internal structures while also forming part of a larger structure.
This indicator is designed to help you visualize and track these multiple layers of movement simultaneously.
In forex markets, certain price expansions tend to recur frequently. Common intraday ranges include:
* 10, 20, 50, 100 pips
* or percentage equivalents such as 0.2%, 0.5%, 1%
Similarly, in indices like the S&P 500, larger timeframes often respect:
* 3%, 5%, 10% moves on the daily chart
These repeating expansions are not random—they reflect the natural rhythm of market participation and liquidity.
What This Indicator Does
This tool plots nested fixed-width channels (Inner, Middle, Outer) to represent:
* Small moves (Inner band) → micro structure
* Medium moves (Middle band) → swing structure
* Large moves (Outer band) → macro structure
By doing so, it keeps short-, medium-, and large-scale price movements in perspective at all times, allowing you to better understand where price is within its broader context.
Customization
Markets behave differently depending on:
* Instrument (e.g., EURUSD vs XAUUSD vs SPX)
* Timeframe
* Volatility conditions
For this reason, users can define their own width parameters in either:
* Points (pips)
* Percentage (%)
This flexibility allows the indicator to adapt to any market or trading style.
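To make the percentage mode concrete, here is a minimal Pine sketch of three nested channels around a moving-average basis; the basis choice, default widths, and input names are illustrative assumptions, not the published script's internals:
//@version=6
indicator("Nested Channels Sketch", overlay = true)
// Hypothetical half-widths, expressed in percent of the basis price.
innerPct = input.float(0.2, "Inner width %")
middlePct = input.float(0.5, "Middle width %")
outerPct = input.float(1.0, "Outer width %")
basis = ta.sma(close, 50) // illustrative anchor; the published script may anchor differently
plot(basis * (1 + innerPct / 100), "Inner Upper", color.green)
plot(basis * (1 - innerPct / 100), "Inner Lower", color.green)
plot(basis * (1 + middlePct / 100), "Middle Upper", color.orange)
plot(basis * (1 - middlePct / 100), "Middle Lower", color.orange)
plot(basis * (1 + outerPct / 100), "Outer Upper", color.red)
plot(basis * (1 - outerPct / 100), "Outer Lower", color.red)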
---
How to Interpret
* Inside Inner Band → Low volatility / equilibrium
* Between Inner & Middle → Controlled movement
* Between Middle & Outer → Expansion phase
* At Outer Band → Potential exhaustion or breakout decision
This structure helps you:
* Identify mean reversion zones
* Recognize trend continuation
* Spot volatility expansion early
Best Used With
This indicator is most effective when combined with other tools such as:
* Moving Averages (EMA) for trend direction
* Oscillators (RSI, Stochastic, etc.) for momentum and divergence
* Price action / liquidity concepts for entries
It provides the context, while other tools can help refine timing.
⚠️ Important Note
This is not a signal indicator.
It is a market structure tool designed to help you:
* Understand where price is
* Anticipate how it may behave next
* Stay aligned with the dominant scale of movement
Pine Script® indicator
ICT Macro + Session High Low Marker
ICT Macro + Session High Low Marker is a clean time-based ICT indicator designed to help traders visually identify important macro time windows and major session highs/lows directly on the chart.
This tool highlights key London and New York ICT Macro windows, helping traders focus on periods where liquidity sweeps, displacement, reversals, or continuation moves may occur. It also marks the Asian, London, and New York session highs and lows, which can be useful for identifying liquidity targets and possible price reactions.
Features
Highlights important ICT Macro time windows.
Includes London Macro and New York Macro sessions.
Marks Asian, London, and New York session highs/lows.
Optional session boxes for better visual structure.
Optional high/low line extension.
Customizable colors, labels, timezone, and session times.
Weekdays-only filter.
Alert conditions for macro starts and session starts/ends.
Best Used For
This indicator is useful for traders who follow ICT concepts such as:
Liquidity sweeps
Kill zones
Macro time windows
Session high/low targeting
London and New York session timing
Intraday bias confirmation
How to Use
Use the macro windows as timing zones, not automatic buy or sell signals. First identify your market bias, liquidity target, and higher-timeframe context. Then use the highlighted macro windows to watch for potential displacement, fair value gaps, reversals, or continuation setups.
The session high and low markers can help you identify where price may seek liquidity during London or New York trading hours.
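The session high/low mechanic reduces to a running extreme inside a session window. A minimal sketch, assuming an illustrative London window in New York time:
//@version=6
indicator("Session High/Low Sketch", overlay = true)
// time() returns na outside the session; the named timezone follows NY daylight saving.
inLondon = not na(time(timeframe.period, "0300-1100", "America/New_York"))
var float sesHi = na
var float sesLo = na
if inLondon and not inLondon[1] // session start: seed with the first bar
    sesHi := high
    sesLo := low
else if inLondon // extend the running extremes
    sesHi := math.max(sesHi, high)
    sesLo := math.min(sesLo, low)
plot(inLondon ? sesHi : na, "Session High", color.red, style = plot.style_linebr)
plot(inLondon ? sesLo : na, "Session Low", color.green, style = plot.style_linebr)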
Disclaimer
This indicator does not provide financial advice and does not guarantee profitable trades. It is intended for educational and analysis purposes only. Always use proper risk management and confirm setups with your own trading plan.
Pine Script® indicator
Auto Structure Lines
Auto Structure Lines automatically detects recent swing highs and swing lows, then plots clean support and resistance levels on the chart. It is designed to help traders quickly identify nearby structure, breakout areas, rejection zones, and possible pullback levels without manually drawing every line.
The indicator uses confirmed pivot highs and lows to create resistance and support lines. Nearby levels are merged using an ATR-based tolerance so the chart stays cleaner and avoids stacking multiple lines around the same price area.
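As a minimal sketch of that mechanic (the pivot length, ATR multiple, and line styling here are assumptions, and the published script's level capping is omitted):
//@version=6
indicator("Pivot Merge Sketch", overlay = true)
len = input.int(10, "Pivot length")
float tol = ta.atr(14) * 0.5 // skip new levels closer than half an ATR to an existing one
var array<float> levels = array.new<float>()
addLevel(float p) =>
    bool merged = false
    for lvl in levels
        if math.abs(lvl - p) < tol
            merged := true
    if not merged
        array.push(levels, p)
        line.new(bar_index - len, p, bar_index, p, extend = extend.right, color = color.gray)
float ph = ta.pivothigh(len, len) // confirmed len bars after the pivot forms
float pl = ta.pivotlow(len, len)
if not na(ph)
    addLevel(ph)
if not na(pl)
    addLevel(pl)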
Key features:
Automatically detects pivot-based support and resistance
Extends structure lines to the right side of the chart
Stores multiple historical levels while showing only the most recent visible levels
ATR-based level merging to reduce clutter
Optional price labels for each visible level
Customizable pivot length, line style, colors, width, and label offset
Separate toggles for support and resistance
This tool is useful for identifying important price areas, planning breakout entries, watching retests, and managing risk around nearby support or resistance.
Auto Structure Lines is a visual decision-support tool. It does not generate buy or sell signals by itself and should be used with price action, volume, trend context, and your own risk management.
Pine Script® indicator
Daily MA Bounce
This script shows the key daily EMAs and MAs on a smaller timeframe. It is useful for determining whether a stock is being rejected at a critical EMA or MA level from the daily chart while you are on an hourly timeframe.
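A minimal sketch of the underlying technique, assuming a single illustrative 21-period daily EMA (the published script's MA list and lengths may differ):
//@version=6
indicator("Daily EMA Sketch", overlay = true)
// Pull a daily EMA onto any intraday chart; the newest value updates while the daily bar forms.
dailyEma21 = request.security(syminfo.tickerid, "D", ta.ema(close, 21))
plot(dailyEma21, "Daily EMA 21", color.orange, 2)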
Pine Script® indicator
Prior Day Levels (PDH/PDL)
Captures prior-day levels. This is useful for marking the key liquidity levels from the daily chart and seeing these patterns on lower timeframes, particularly the hourly timeframe.
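A minimal sketch of one common non-repainting way to plot these levels, assuming only the prior day's high and low are needed:
//@version=6
indicator("Prior Day Levels Sketch", overlay = true)
// high[1]/low[1] on the daily feed with lookahead_on returns the completed prior day's values.
pdh = request.security(syminfo.tickerid, "D", high[1], lookahead = barmerge.lookahead_on)
pdl = request.security(syminfo.tickerid, "D", low[1], lookahead = barmerge.lookahead_on)
plot(pdh, "PDH", color.red)
plot(pdl, "PDL", color.green)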
Pine Script® indicator
Adaptive Wave Pressure Index [JOAT]
Adaptive Wave Pressure Index
Introduction
Adaptive Wave Pressure Index is a normalized slope oscillator built to measure directional pressure through the relationship between regression slope and volatility. By scaling a manually calculated OLS slope with ATR, the script produces a dimensionless momentum reading that can be compared across instruments and timeframes much more cleanly than raw slope alone.
This indicator is designed for traders who want wave pressure, not just speed. It tracks directional force, smooths that force into fast and slow lines, colors the histogram using structural swing context, and adds divergence detection for potential exhaustion.
Why This Indicator Exists
Volatility-Normalized Momentum: Regression slope is scaled by ATR to improve comparability
Fast / Slow Pressure Read: Reveals acceleration and deceleration of directional force
Structure Overlay: Swing-sequence counts add context to histogram strength
Zone Framework: Overbought and oversold thresholds define pressure extremes
Divergence Layer: Flags when price reaches new extremes without matching pressure
Core Components Explained
1. Manual OLS Slope
rawSlope = f_olsSlope(regLength)
The script calculates slope directly from the last N closes rather than relying on a built-in regression shortcut. This provides more control over normalization and display logic.
2. ATR Normalization
normSlope = rawSlope / ta.atr(atrNormPeriod)
Dividing slope by ATR transforms it into a volatility-aware measure of pressure. A positive slope on a low-volatility asset and a positive slope on a high-volatility asset become more comparable after normalization.
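The description does not show `f_olsSlope` itself, but a least-squares slope over the last N closes, normalized by ATR and smoothed into fast and slow lines, could be sketched like this (all lengths are illustrative):
//@version=6
indicator("Normalized Slope Sketch")
// Least-squares (OLS) slope of the last n closes, with the oldest bar at x = 0.
f_olsSlope(int n) =>
    float sumX = 0.0
    float sumY = 0.0
    float sumXY = 0.0
    float sumX2 = 0.0
    for i = 0 to n - 1
        float y = close[n - 1 - i]
        sumX += i
        sumY += y
        sumXY += i * y
        sumX2 += i * i
    (n * sumXY - sumX * sumY) / (n * sumX2 - sumX * sumX)
rawSlope = f_olsSlope(20)
normSlope = rawSlope / ta.atr(14) // volatility-aware, comparable across symbols
fast = ta.ema(normSlope, 5)
slow = ta.ema(normSlope, 13)
plot(fast - slow, "Histogram", color.gray, style = plot.style_histogram)
plot(fast, "Fast", color.aqua)
plot(slow, "Slow", color.orange)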
3. Fast / Slow Pressure System
Two EMAs are applied to the normalized slope:
Fast Line: More responsive pressure state
Slow Line: More stable reference
Histogram: Spread between fast and slow, showing acceleration or fade
4. Structural Sequence Layer
The indicator also counts consecutive higher lows and lower highs in price. When structure strongly supports the current pressure direction, histogram colors intensify. This adds a valuable distinction between pressure that is statistically rising and pressure that is also structurally confirmed.
5. Divergence and Zone Logic
The script highlights:
Fast-line crosses of overbought and oversold thresholds
Fast/slow line crosses
Bullish and bearish divergences
Divergence lines are retained with a fixed cap so the pane stays readable over time.
Visual Elements
Histogram: Pressure spread with structural-intensity color logic
Fast Line: Main directional read
Slow Line: Reference pressure line
Zero Fill: Directional bias area fill
OB/OS Background: Soft zone shading for extreme pressure
Markers: Crosses and divergence markers
Dashboard: Raw slope, normalized slope, trend, structure sequence, divergence, and active zone
Input Parameters
Regression Length: Window for OLS slope calculation
ATR Norm Period: Volatility baseline used for normalization
Fast / Slow EMA: Pressure responsiveness controls
OB / OS Levels: Extreme pressure thresholds
Pivot Left / Right: Sensitivity for structural and divergence logic
How to Use This Indicator
Step 1: Read whether fast is above or below slow.
Step 2: Check the histogram to see whether pressure is expanding or contracting.
Step 3: Use the sequence readout to judge whether price structure agrees with the oscillator.
Step 4: Treat divergences as warnings that pressure may be weakening.
Step 5: Use OB/OS events to identify stretched pressure, especially after large runs.
Best Practices
Use on instruments with clean swings and sufficient range
Respect signals more when sequence direction agrees with fast/slow direction
Use divergence with structure, not by itself
Increase regression length for smoother wave pressure
Lower lengths react faster but create more noise
Indicator Limitations
Normalized slope improves comparison but does not eliminate market differences
Pressure can stay elevated in strong trends
Divergences can persist before price turns
Short settings increase false transitions
Structure counts are descriptive, not predictive
Technical Implementation
Built in Pine Script v6 using:
Manual OLS slope computation
ATR normalization
Dual-EMA pressure smoothing
Pivot-based structure counting
Capped divergence-line management
Confirmed-bar signal generation
Originality Statement
This indicator is original in the way it combines normalized regression slope, structural sequence intensity, and divergence management into a single wave-pressure framework. Its purpose is not just to show direction, but to show how forceful and how structurally supported that direction is.
Disclaimer
This indicator is provided for educational and informational purposes only. It is not financial advice. Momentum and divergence tools can fail, especially during volatile transitions. Always use proper risk management and independent confirmation.
-Made with passion by officialjackofalltrades
Pine Script® indicator
ICT Macro Time Highlighter - NY + London Macros
ICT Macro Time Highlighter — NY & London Sessions
This indicator highlights key ICT macro time windows directly on your TradingView chart using New York time. It is designed to help traders visually identify important time-based liquidity windows during the London and New York sessions.
The script highlights the following macro times:
London Macro Windows
London Macro 1: 2:33 AM – 3:00 AM NY Time
London Macro 2: 4:03 AM – 4:30 AM NY Time
New York Macro Windows
8:50 AM – 9:10 AM: NY Open liquidity move
9:50 AM – 10:10 AM: Continuation or reversal window
10:50 AM – 11:10 AM: Liquidity/displacement window
11:50 AM – 12:10 PM: Lunch-time liquidity
1:10 PM – 1:40 PM: PM session setup
3:15 PM – 3:45 PM: Late-day move
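Each window reduces to a session test in Pine. A minimal sketch for the 8:50 AM – 9:10 AM window, with the others following the same pattern:
//@version=6
indicator("Macro Window Sketch", overlay = true)
// time() returns na outside the session; the named timezone tracks NY daylight saving.
inMacro = not na(time(timeframe.period, "0850-0910", "America/New_York"))
bgcolor(inMacro ? color.new(color.orange, 85) : na)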
Features:
Highlights London and New York ICT macro time windows.
Uses America/New_York timezone.
Automatically adjusts with New York daylight saving time.
Optional weekday-only filter.
Customizable colors for each macro window.
Optional labels at the start of every macro time.
Alert conditions for London macros, New York macros, and all macro windows.
This indicator is not a buy or sell signal. It is a time-based visual tool designed to help ICT traders focus on specific trading windows where liquidity sweeps, displacement, market structure shifts, fair value gaps, continuation moves, or reversals may occur.
Best used together with:
Liquidity sweep analysis
Market structure shift / CHOCH
Fair value gaps
Order blocks
Premium and discount zones
Higher timeframe bias
Recommended timeframes:
1-minute
3-minute
5-minute
15-minute
Use this indicator as a timing filter to support your ICT trading plan, not as a standalone trading strategy.
Pine Script® indicator
Multi-Timeframe RSI Box Adjustable
Multi-RSI boxes: shows the 5-minute, 15-minute, 30-minute, 1-hour, daily, weekly, and monthly RSI in neat little boxes on your screen.
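A minimal sketch of the approach with three of those timeframes (the full timeframe list and the box styling belong to the published script):
//@version=6
indicator("MTF RSI Table Sketch", overlay = true)
var table panel = table.new(position.top_right, 2, 3, border_width = 1)
// One request.security call per timeframe, all using the same 14-period RSI.
rsi15 = request.security(syminfo.tickerid, "15", ta.rsi(close, 14))
rsi60 = request.security(syminfo.tickerid, "60", ta.rsi(close, 14))
rsiD = request.security(syminfo.tickerid, "D", ta.rsi(close, 14))
if barstate.islast
    table.cell(panel, 0, 0, "15m")
    table.cell(panel, 1, 0, str.tostring(rsi15, "#.0"))
    table.cell(panel, 0, 1, "1h")
    table.cell(panel, 1, 1, str.tostring(rsi60, "#.0"))
    table.cell(panel, 0, 2, "D")
    table.cell(panel, 1, 2, str.tostring(rsiD, "#.0"))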
Pine Script® indicator
Vol Regime Compass [N4]
Vol Regime Compass
verb · size · regime — three words a week
A volatility-regime indicator for swing and position traders that converts each asset's own 252-day vol distribution into a verb (BUY+/BUY/REDUCE/WAIT/OBSERVE) and a sizing % (25-200%) — so DCA execution becomes a 5-second decision, not an open question.
The 5-second HUD
Every Monday morning you open the chart. You see three words:
Verb buy
Size 125%
Regime NORMAL
Verb = the action. Do it.
Size % = what fraction of your base DCA this week.
Regime = why the verb says what it says.
Everything else is diagnostic, off by default.
Five verbs. Five size bands. No dissonance.
BUY+ (125-200%): low vol or post-crash exit
BUY (75-125%): normal vol
REDUCE (50-100%): elevated vol
WAIT (25-50%): extreme vol or Critical Slowing detected
OBSERVE (0%): insufficient data
Why WAIT and never SKIP — Taleb's rule: in markets with asymmetric upside, never go to zero. Size down, don't size zero.
What makes it different
1. Asset-relative percentile, not fixed thresholds. BTC, NASDAQ, gold, copper — completely different vol distributions. A "vol > 2.0" threshold works on one asset and fails on another. Vol Regime Compass uses percentile rank against the asset's own 252-day vol history. Scale-free, transferable across assets without re-tuning (see the sketch after this list).
2. Critical Slowing Down detection (Scheffer et al., Nature 2009). Before a regime change, three things rise together: variance, autocorrelation, and vol-of-vol. Sum z-scores; when sum > 2, the system is at a phase-transition boundary, and the indicator dampens sizing toward 1×. Real statistical physics.
3. Multi-timeframe coupling. The score combines vol percent-rank across D, W, M timeframes. Coherence or hesitation — the indicator decides.
4. Theta extremal-index (Beirlant 2004 EVT). Counts clusters of extreme returns, not individual events. θ→1 = independent (coherent). θ→0 = clustered (stressed). Amplification θ^ν with ν=3 validated in parameter sweep.
5. 2×2 regime overlay (trend × vol). bull-quiet, bull-noisy, bear-quiet, bear-noisy — each multiplies sizing. Bull-quiet accumulates. Bear-noisy survives.
6. Gold anchor. XAU / XAUT / GOLD get +15% conviction to activate.
7. The cloud IS the math. Cloud between price and SMA200 has direction (color) and density (alpha). High conviction = dense. Low = ghost. User-editable colors (default coral #F4A261 + lilac #E5DCEF).
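Points 1 and 2 can be sketched in a few lines of Pine; the window lengths and the Critical Slowing Down proxy below are illustrative assumptions, not the indicator's exact internals:
//@version=6
indicator("Vol Regime Sketch")
// z-score of a series against its own trailing year.
z(src) => (src - ta.sma(src, 252)) / ta.stdev(src, 252)
ret = math.log(close / close[1])
vol = ta.stdev(ret, 20) * math.sqrt(252) // annualized 20-day realized volatility
volRank = ta.percentrank(vol, 252) // point 1: percentile against own history
// Point 2 proxy: variance, lag-1 autocorrelation, and vol-of-vol z-scores summed.
csd = z(vol * vol) + z(ta.correlation(ret, ret[1], 20)) + z(ta.stdev(vol, 20))
plot(volRank, "Vol percentile", color.aqua)
plot(csd, "CSD sum of z-scores", color.orange)
hline(2, "CSD boundary")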
Honest backtest data
Single-asset BTC walk-forward, 8.5 years (3,107 daily bars):
Strategy            Sharpe   Per-$     Max DD
Buy & Hold (lump)   0.33     15.76×    -76.6%
Naive DCA           1.08     4.300×    -74.4%
v7 vol-adjusted     1.02     4.305×    -74.6%
Per-dollar edge over naive DCA: +0.13% — statistically zero. The backtest tells the truth. This is NOT an alpha engine on a single asset.
What it DOES deliver (single-asset):
Behavioral discipline — the verb removes weekly indecision
Capital efficiency on a fixed budget — distributes $X/week toward low-vol weeks
Regime awareness — you know when not to discretionarily increase
Where the multi-asset V6c production strategy DOES show edge
22-asset rotation, purged combinatorial cross-validation (CPCV 15/15) + Deflated Sharpe Ratio correction (Bailey & López de Prado 2014) for K=130 research runs:
OOS Sharpe: 1.964
DSR_ann: +0.524
Bootstrap CI lo: +0.806
COVID 2020 stress: -0.894 (clears -1.0 floor)
2026 H1 OOS: +0.81
μ / σ annual: 24.01% / 12.24%
Stress windows the system still fails (transparent)
COVID 2020: Sharpe -0.894 (clears -1.0 by 0.11, single observation, brittle)
Crypto Winter 2022: Sharpe +0.09 (extended crypto bear flips negative)
Gold-reversal: a 20%+ XAUT drawdown in live OOS turns the 85% neutral floor into a concentration killer
The system tells the truth about its limits. You operate knowing them.
How to read in 3 steps
Verb first. Do what it says.
Size second. Multiply your base DCA by the %.
Everything else is diagnostic — open the drawer only if you want to understand WHY.
For whom
Solo operator running weekly/monthly DCA on a handful of assets. Bilingual ES/EN. Wants discipline over magic. Reads statistical physics + ergodicity economics (Peters 2019 Nature Physics) without panic.
NOT for whom
Looking for directional buy/sell signals (this isn't a signal generator). Want discretionary alpha without work (the verb augments judgment, doesn't replace). Intraday trader (this is daily horizon).
Frequently Asked
Is this a trading signal indicator? No. Regime + sizing only. It tells you what regime an asset is in and what fraction of your DCA to deploy.
Does it work on stocks, crypto, forex, metals, indices? Yes — by design. Asset-relative percentile makes regime classification scale-free. Validated on BTC, ETH, XAUT, NAS100, UK100, GER40, FRA40, gold, silver, platinum, copper, equities.
What's the best timeframe? Daily. Weekly works for slower operators. Sub-daily out of scope.
Does it predict price direction? No. Regime classification only.
How was it validated? Walk-forward backtest with purged combinatorial cross-validation, 130 research runs, Deflated Sharpe Ratio correction, three stress-window evaluations.
How is it different from typical vol regime indicators? Most use ATR z-score or percentile. This adds Critical Slowing Down detection (Scheffer 2009 statistical physics) + extremal-index theta + multi-timeframe coupling + 5 verbs with size bands. Classifies + orients.
Methodology references
Critical Slowing Down: Scheffer M. et al. Nature 461, 53-59 (2009)
Ergodicity: Peters O. Nature Physics 15, 1216-1221 (2019)
Extremal index: Beirlant J. et al. Statistics of Extremes, Wiley (2004)
Asymmetric upside: Taleb N.N. Antifragile (2012)
Walk-forward + DSR: Bailey D., López de Prado M. JPM 40(5) 2014
Purged combinatorial cross-validation: López de Prado, Advances in Financial ML (2018)
Covariance shrinkage: Ledoit O., Wolf M. JMA 88(2) 2004
Disclaimer
Educational content. NOT financial advice. Past performance does not guarantee future results. The operator is responsible for decisions, sizing, and risk. The indicator gives multipliers — you decide the amount. Test on demo first.
Author
N — psychologist + HR Transformation specialist. Personal V6c system 2025-2026, walk-forward validated 130 runs with deflated Sharpe correction. Pine v6 native. Bilingual ES/EN.
Following González over Kundera: the scar is carried legibly.
Pine Script® indicator
Negative Volume Index Oscillator [ZOM]
Negative Volume Index Oscillator (NVI Oscillator) is a momentum-based indicator designed to highlight underlying market activity by focusing on periods of declining volume. Built around the classic Negative Volume Index (NVI), this tool transforms the raw data into a normalized oscillator, making it easier to interpret shifts in trend, momentum, and potential reversals.
The oscillator measures the deviation of smoothed NVI from its long-term average, presenting this relationship as a percentage-based value centered around a zero line. This allows traders to quickly identify whether price action is supported by quieter, more “informed” market participation, which is often associated with institutional positioning.
A configurable signal line is included to help identify momentum shifts through crossovers and crossunders. These interactions can be used to spot early trend changes or confirm continuation depending on market context.
To provide additional structure, the indicator includes optional volatility bands derived from standard deviation. These bands help identify statistically stretched conditions where price may be overextended, increasing the likelihood of mean reversion or consolidation.
The script also features built-in divergence detection, comparing oscillator movement to price action. Bullish and bearish divergences are automatically identified, offering early warning signals of potential reversals when momentum and price begin to disagree.
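For readers unfamiliar with the base series, a minimal sketch of an NVI-derived oscillator with a signal line follows; the starting value and smoothing lengths are illustrative, not the script's defaults:
//@version=6
indicator("NVI Oscillator Sketch")
// Classic NVI: updates by the bar's return only when volume declines.
var float nvi = 1000.0
if volume < volume[1]
    nvi := nvi * (1 + (close - close[1]) / close[1])
smoothed = ta.ema(nvi, 21)
baseline = ta.ema(nvi, 255) // long-term average of the smoothed index
osc = (smoothed - baseline) / baseline * 100 // percent deviation around zero
signal = ta.ema(osc, 9)
plot(osc, "NVI Oscillator", color.aqua)
plot(signal, "Signal", color.orange)
hline(0)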
Key features include:
- Oscillator derived from smoothed Negative Volume Index
- Signal line with multiple moving average options
- Volatility bands for identifying extreme conditions
- Histogram visualization for momentum strength
- Automatic bullish and bearish divergence detection
- Crossover and crossunder markers for signal clarity
- Fully customizable smoothing, lengths, and visual settings
This indicator is designed to complement price action analysis by providing a deeper view into momentum behavior during low-volume conditions, helping traders identify potential turning points and confirm trend strength.
Pine Script® indicator
NeuraLib Expansion: Advanced Model Layers
NeuraLib_Models is the companion model expansion for NeuraLib.
NeuraLib provides the runtime: tensors, graph execution, datasets, scalers, losses, optimizers, training, inference, and validation tools. NeuraLib_Models builds on that foundation with higher-level neural architectures that are difficult and repetitive to write by hand.
The purpose of this expansion is to keep the main NeuraLib runtime clean, compact, and general, while giving researchers ready-to-use model families for sequence learning, attention, temporal pattern extraction, and Reinforcement Learning workflows.
----------------------------------------------------------------------------------------------------------------
🔷 HOW IT FITS INTO NEURALIB
NeuraLib_Models is built entirely on top of the public NeuraLib API. It does not replace the main runtime and it does not introduce a separate training engine.
After importing NeuraLib_Models, its fluent methods become available directly on NeuraLib `Sequential` models. The expansion alias can remain unused in the layer chain.
//@version=6
indicator("NeuraLib Models Quick Start", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
import Alien_Algorithms/NeuraLib_Models/1 as models
var nl.Sequential model = nl.sequential("advanced_model")
var float qLong = na
var float qFlat = na
var float qShort = na
if barstate.isfirst
model := model
.input(array.from(8), "sequence")
.temporalConvStack(4, 2, 2, 2, 1, 1, nl.ActivationKind.relu, 0.0, "temporal")
.globalAvgPool1d(3, 2, "pool")
.duelingQHead(4, 3, nl.ActivationKind.relu, "dueling_head")
.build(nl.rng(7))
float ret0 = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
float ret1 = na(close[2]) or close[2] == 0.0 ? 0.0 : close[1] / close[2] - 1.0
float ret2 = na(close[3]) or close[3] == 0.0 ? 0.0 : close[2] / close[3] - 1.0
float ret3 = na(close[4]) or close[4] == 0.0 ? 0.0 : close[3] / close[4] - 1.0
float atrValue = ta.atr(14)
float atr0 = close == 0.0 ? 0.0 : atrValue / close
float atr1 = close[1] == 0.0 ? 0.0 : atrValue[1] / close[1]
float atr2 = close[2] == 0.0 ? 0.0 : atrValue[2] / close[2]
float atr3 = close[3] == 0.0 ? 0.0 : atrValue[3] / close[3]
bool ready = not na(ret3) and not na(atr3)
if ready
nl.Tensor state = nl.vector(array.from(ret3, atr3, ret2, atr2, ret1, atr1, ret0, atr0), "state_window")
nl.Tensor qValues = model.predict(state)
qLong := qValues.get1d(0)
qFlat := qValues.get1d(1)
qShort := qValues.get1d(2)
plot(qLong, "Q long", color = color.lime, linewidth = 2)
plot(qFlat, "Q flat", color = color.gray)
plot(qShort, "Q short", color = color.red, linewidth = 2)
hline(0.0, "Zero", color = color.new(color.gray, 70))
The model is still a normal NeuraLib model. You still call `.compile()`, `.trainOnBatch()`, `.predict()`, `.evaluate()`, `.getWeightsArray()`, and `.softUpdateFrom()` from the main library.
----------------------------------------------------------------------------------------------------------------
🔷 WHY THIS EXPANSION EXISTS
The main NeuraLib library is the foundation. It exposes a graph engine powerful enough to create custom architectures, but repeatedly building LSTM gates, attention projections, residual blocks, Conv1D stacks, or Transformer paths from raw graph operations would be too verbose for everyday research.
NeuraLib_Models packages those patterns into readable blocks:
Temporal models : Conv1D blocks, temporal convolution stacks, global average pooling, and global max pooling for flattened sequence inputs.
Recurrent models : LSTM and GRU blocks for compact sequence memory.
Attention models : Self-attention, multi-head self-attention, cross-attention, Transformer encoder blocks, Transformer encoder stacks, and Transformer decoder blocks.
Residual models : Residual dense blocks for deeper feedforward paths.
Reinforcement Learning heads : Q-head blocks and dueling Q-heads for action-value style outputs.
Replay utilities : Deterministic Prioritized Experience Replay for reproducible Pine research.
Sequence helpers : Positional encoding for token, sequence, and attention workflows.
----------------------------------------------------------------------------------------------------------------
🔷 PRACTICAL EXAMPLES
🔸 Temporal Conv Model With Dueling Q-Head
This pattern is useful when a flattened sequence contains recent market states and the output represents action values.
//@version=6
indicator("NeuraLib Models Temporal Q Example", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
import Alien_Algorithms/NeuraLib_Models/1 as models
var nl.Sequential qModel = nl.sequential("temporal_q_model")
var nl.WindowDataset qDataset = nl.windowDataset(8, 3, 400, "q_rows")
var float qDown = na
var float qNeutral = na
var float qUp = na
var float qLoss = na
if barstate.isfirst
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
.presetQValues()
.optimizer(nl.adamW(0.001))
.withTrainingGate(true)
qModel := qModel
.input(array.from(8), "state_window")
.temporalConvStack(4, 2, 2, 2, 1, 1, nl.ActivationKind.relu, 0.0, "temporal")
.globalAvgPool1d(3, 2, "pool")
.duelingQHead(4, 3, nl.ActivationKind.relu, "dueling_head")
.compile(cfg)
qDataset := qDataset
.setInputScaler(nl.ScalerKind.zScore)
.setTargetScaler(nl.ScalerKind.none)
float ret0 = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
float ret1 = na(close[2]) or close[2] == 0.0 ? 0.0 : close[1] / close[2] - 1.0
float ret2 = na(close[3]) or close[3] == 0.0 ? 0.0 : close[2] / close[3] - 1.0
float ret3 = na(close[4]) or close[4] == 0.0 ? 0.0 : close[3] / close[4] - 1.0
float ret4 = na(close[5]) or close[5] == 0.0 ? 0.0 : close[4] / close[5] - 1.0
float atrValue = ta.atr(14)
float atr0 = close == 0.0 ? 0.0 : atrValue / close
float atr1 = close[1] == 0.0 ? 0.0 : atrValue[1] / close[1]
float atr2 = close[2] == 0.0 ? 0.0 : atrValue[2] / close[2]
float atr3 = close[3] == 0.0 ? 0.0 : atrValue[3] / close[3]
float atr4 = close[4] == 0.0 ? 0.0 : atrValue[4] / close[4]
bool rowReady = not na(ret4) and not na(atr4)
if rowReady
array<float> features = array.from(ret4, atr4, ret3, atr3, ret2, atr2, ret1, atr1)
float downTarget = math.max(-ret0, 0.0)
float neutralTarget = math.max(0.002 - math.abs(ret0), 0.0)
float upTarget = math.max(ret0, 0.0)
qDataset := qDataset.pushRow(features, array.from(downTarget, neutralTarget, upTarget))
if qDataset.ready(48)
if barstate.islastconfirmedhistory
nl.Batch train = qDataset.trainBatch(12)
qModel := qModel.trainOnBatch(train.inputTensor, train.targetTensor)
qLoss := qModel.trainStats.lastLoss
nl.Tensor liveState = nl.vector(array.from(ret3, atr3, ret2, atr2, ret1, atr1, ret0, atr0), "live_state")
nl.Tensor scaledState = qDataset.scaleInput(liveState)
nl.Tensor qValues = qModel.predict(scaledState)
qDown := qValues.get1d(0)
qNeutral := qValues.get1d(1)
qUp := qValues.get1d(2)
plot(qDown, "Q down", color = color.red, linewidth = 2)
plot(qNeutral, "Q neutral", color = color.gray)
plot(qUp, "Q up", color = color.lime, linewidth = 2)
plot(qLoss, "Training loss", color = color.orange)
hline(0.0, "Zero", color = color.new(color.gray, 70))
Input shape `array.from(8)` represents a flattened 4 step by 2 feature sequence. The temporal stack extracts short sequence structure, pooling compresses the sequence, and the dueling head separates value and advantage paths before producing action scores. The example trains only on the last confirmed historical bar so it remains safe to paste onto long charts.
🔸 Transformer Encoder For Token Rows
Attention models are useful when each row is a token or time step, and each column is a feature dimension.
//@version=6
indicator("NeuraLib Models Transformer Encoder Example", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
import Alien_Algorithms/NeuraLib_Models/1 as models
var nl.Sequential encoder = nl.sequential("encoder_model")
var float tokenSignal = na
var float tokenContext = na
var float tokenVolatility = na
if barstate.isfirst
encoder := encoder
.input(array.from(4), "tokens")
.multiHeadSelfAttention(4, 2, true, "mha")
.transformerEncoder(4, true, 2, nl.ActivationKind.geluApprox, "encoder", 0.05, 2)
.build(nl.rng(11))
float emaValue = ta.ema(close, 21)
float atrValue = ta.atr(14)
float ret0 = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
float ret1 = na(close[2]) or close[2] == 0.0 ? 0.0 : close[1] / close[2] - 1.0
float ret2 = na(close[3]) or close[3] == 0.0 ? 0.0 : close[2] / close[3] - 1.0
float ret3 = na(close[4]) or close[4] == 0.0 ? 0.0 : close[3] / close[4] - 1.0
float emaGap0 = emaValue == 0.0 ? 0.0 : close / emaValue - 1.0
float emaGap1 = emaValue[1] == 0.0 ? 0.0 : close[1] / emaValue[1] - 1.0
float emaGap2 = emaValue[2] == 0.0 ? 0.0 : close[2] / emaValue[2] - 1.0
float emaGap3 = emaValue[3] == 0.0 ? 0.0 : close[3] / emaValue[3] - 1.0
float atr0 = close == 0.0 ? 0.0 : atrValue / close
float atr1 = close[1] == 0.0 ? 0.0 : atrValue[1] / close[1]
float atr2 = close[2] == 0.0 ? 0.0 : atrValue[2] / close[2]
float atr3 = close[3] == 0.0 ? 0.0 : atrValue[3] / close[3]
bool ready = not na(ret3) and not na(emaGap3) and not na(atr3)
if ready
nl.Tensor tokens = nl.vector(array.from(
ret3, emaGap3, atr3, -1.0,
ret2, emaGap2, atr2, -0.33,
ret1, emaGap1, atr1, 0.33,
ret0, emaGap0, atr0, 1.0), "tokens").reshape(array.from(4, 4))
nl.Tensor encoded = encoder.predict(tokens)
tokenSignal := encoded.get1d(12)
tokenContext := encoded.get1d(13)
tokenVolatility := encoded.get1d(14)
plot(tokenSignal, "Latest token signal", color = color.aqua, linewidth = 2)
plot(tokenContext, "Latest token context", color = color.purple)
plot(tokenVolatility, "Latest token volatility", color = color.orange)
hline(0.0, "Zero", color = color.new(color.gray, 70))
In this example, each input row has 4 features. `headCount` is 2, so the model dimension is split into two attention heads.
Attention rule: `modelDim` must be divisible by `headCount`, and the current implementation supports up to 8 heads.
🔸 Prioritized Experience Replay
Prioritized Experience Replay stores examples with priorities, then returns reproducible weighted samples. This is especially useful for Reinforcement Learning experiments where high-error transitions should be revisited more often.
//@version=6
indicator("NeuraLib Models PER Example", overlay = false, calc_bars_count = 1200)
import Alien_Algorithms/NeuraLib/1 as nl
import Alien_Algorithms/NeuraLib_Models/1 as models
var models.PrioritizedReplayBuffer replay = models.prioritizedReplayBuffer(4, 2, 300, "replay")
var nl.Sequential replayModel = nl.sequential("replay_q_model")
var float replayLoss = na
var float firstImportanceWeight = na
var float replayRows = na
if barstate.isfirst
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
.presetQValues()
.optimizer(nl.adamW(0.001))
.trainEveryCall()
replayModel := replayModel
.input(array.from(4), "state")
.dense(8, nl.ActivationKind.relu, "hidden")
.qHead(2, nl.ActivationKind.linear, "q_values")
.compile(cfg)
float rsiValue = ta.rsi(close, 14)
float emaValue = ta.ema(close, 21)
float atrValue = ta.atr(14)
float atrPct = close == 0.0 ? 0.0 : atrValue / close
float momentum = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
float nextReturn = na(close[1]) ? 0.0 : nl.nextReturnValue(close[1], close)
bool rowReady = not na(rsiValue[1]) and not na(emaValue[1]) and not na(atrPct[1]) and not na(momentum[1])
if rowReady
float prevEma = emaValue[1]
float priceVsEma = prevEma == 0.0 ? 0.0 : close[1] / prevEma - 1.0
array<float> stateFeatures = array.from(rsiValue[1] / 100.0, priceVsEma, atrPct[1], momentum[1])
array<float> targetValues = array.from(math.max(-nextReturn, 0.0), math.max(nextReturn, 0.0))
float priority = math.abs(nextReturn) + 0.0001
replay := replay.pushExperience(stateFeatures, targetValues, priority)
replayRows := float(replay.size())
if replay.ready(32)
models.PrioritizedReplaySample sample = replay.sampleBatch(32, 0.6, 0.4, 17)
replayModel := replayModel.trainOnBatch(sample.batch.inputTensor, sample.batch.targetTensor)
replayLoss := replayModel.trainStats.lastLoss
firstImportanceWeight := sample.weightArray.size() > 0 ? sample.weightArray.get(0) : na
if sample.indexArray.size() > 0
replay := replay.updatePriority(sample.indexArray.get(0), replayLoss + 0.0001)
plot(replayLoss, "Replay training loss", color = color.orange, linewidth = 2)
plot(firstImportanceWeight, "First sample weight", color = color.aqua)
The returned sample includes:
batch : A normal NeuraLib `Batch` containing sampled inputs and targets.
indexArray : Logical replay indices that can be passed back to `updatePriority()`.
weightArray : Normalized importance weights for custom loss weighting or diagnostics.
sampleRows : Number of sampled rows.
PER sampling is deterministic for a given buffer, `batchSize`, and `seed`. That makes Pine tests and live research easier to reproduce.
----------------------------------------------------------------------------------------------------------------
🔷 MODEL FAMILIES
🔸 Residual Dense Blocks
`residualDense()` adds a feedforward residual block. Residual paths help preserve information through deeper models and reduce the chance that a dense stack destroys useful features too early.
🔸 Conv1D And Temporal Convolution Stacks
`conv1d()` and `temporalConvStack()` operate on flattened sequence inputs. A sequence with `timeSteps = 4` and `featureCount = 2` is represented as 8 input features. These blocks are useful for local temporal structure, short rolling windows, feature rhythm, and compact pattern extraction.
🔸 Global Pooling
`globalAvgPool1d()` and `globalMaxPool1d()` compress flattened sequence outputs into feature-level summaries. Average pooling captures broad sequence behavior, while max pooling emphasizes the strongest activation per feature.
🔸 LSTM And GRU Blocks
`lstm()` and `gru()` provide recurrent sequence memory over flattened time-series inputs. They are useful when the order of recent states matters more than a single snapshot.
🔸 Attention And Transformers
`selfAttention()`, `multiHeadSelfAttention()`, `crossAttention()`, `transformerEncoder()`, `transformerEncoderStack()`, and `transformerDecoder()` bring attention-style modeling into Pine. They are designed for compact token matrices, packed target-memory layouts, and small Transformer-style research models that fit TradingView limits.
🔸 Q-Heads And Dueling Q-Heads
`qHeadBlock()` creates action-value style outputs. `duelingQHead()` splits the model into value and advantage branches, then recombines them into Q-values. This is useful when you want the model to estimate both the overall state value and the relative value of each action.
🔸 Positional Encoding
`pushPositionalEncoding()` adds sinusoidal position features to a NeuraLib `FeatureBuilder`. This helps attention-style models distinguish where a token or time step sits in a sequence.
----------------------------------------------------------------------------------------------------------------
🔷 FEATURE QUICK REFERENCE
Built on NeuraLib : Uses the main NeuraLib graph, tensor, training, optimizer, dataset, and inference runtime.
Fluent API : Adds methods directly to NeuraLib `Sequential` models after import.
Block factories : Provides standalone `GraphBlock` factories for users who want lower-level composition.
Temporal modeling : Conv1D, temporal convolution stacks, and 1D pooling.
Recurrent modeling : LSTM and GRU sequence blocks.
Attention modeling : Self-attention, multi-head self-attention, cross-attention, encoders, encoder stacks, and decoders.
Reinforcement Learning support : Q-heads, dueling Q-heads, target-model soft updates through NeuraLib, and Prioritized Experience Replay.
Reproducible replay : PER sampling is deterministic for a given seed.
Shape guardrails : Advanced builders validate expected model feature counts and attention head compatibility.
----------------------------------------------------------------------------------------------------------------
🔷 IMPORTANT USAGE NOTES
Import order matters : Import `NeuraLib` first, then `NeuraLib_Models`.
The alias can be unused : The imported expansion registers methods on NeuraLib types, so `.lstm()`, `.gru()`, `.transformerEncoder()`, and similar methods can be called in the model chain.
Keep models compact : Pine Script has execution limits. Start with small hidden sizes, short sequences, and low head counts.
Control chart history : Use `calc_bars_count = 600` in `indicator()` when needed to balance available training history against model size and execution time.
Respect sequence shapes : Conv1D, temporal stacks, LSTM, and GRU methods expect flattened sequence sizes of `timeSteps * featureCount`.
Respect attention shapes : Attention methods expect each input row to have `modelDim` columns. Cross-attention and decoder blocks use packed rows.
Use NeuraLib guardrails : Train/validation splits, scalers, EarlyStopper, training gates, and gradient clipping remain part of the main NeuraLib workflow.
----------------------------------------------------------------------------------------------------------------
🔷 API REFERENCE
🔸 Sequential Methods
residualDense(hiddenUnits, activationKind, dropoutRate, name) : Adds a residual dense block.
duelingQHead(hiddenUnits, actionCount, activationKind, name) : Adds a dueling value/advantage Q-head.
conv1d(timeSteps, featureCount, filters, kernelSize, stride, activationKind, name) : Adds a Conv1D block for flattened sequences.
temporalConvStack(timeSteps, featureCount, filters, kernelSize, layers, stride, activationKind, dropoutRate, name) : Adds stacked temporal Conv1D layers.
globalAvgPool1d(timeSteps, featureCount, name) : Adds global average pooling over a flattened 1D sequence.
globalMaxPool1d(timeSteps, featureCount, name) : Adds global max pooling over a flattened 1D sequence.
lstm(timeSteps, featureCount, units, activationKind, name) : Adds an LSTM scan block.
gru(timeSteps, featureCount, units, activationKind, name) : Adds a GRU scan block.
selfAttention(modelDim, causal, name) : Adds row-wise self-attention.
multiHeadSelfAttention(modelDim, headCount, causal, name) : Adds multi-head self-attention.
crossAttention(queryRows, memoryRows, modelDim, headCount, name) : Adds packed query-memory cross-attention.
transformerEncoder(modelDim, causal, ffMultiplier, activationKind, name, dropoutRate, headCount) : Adds one Transformer encoder block.
transformerEncoderStack(modelDim, layers, causal, ffMultiplier, activationKind, dropoutRate, headCount, name) : Adds repeated Transformer encoder blocks.
transformerDecoder(targetRows, memoryRows, modelDim, headCount, ffMultiplier, activationKind, dropoutRate, name) : Adds a packed target-memory Transformer decoder.
🔸 GraphBlock Factories
qHeadBlock(inputFeatures, actionCount, activationKind, name) : Creates a Q-head block.
duelingQHeadBlock(inputFeatures, hiddenUnits, actionCount, activationKind, name) : Creates a dueling Q-head block.
residualDenseBlock(inputFeatures, hiddenUnits, activationKind, dropoutRate, name) : Creates a residual dense block.
conv1dBlock(timeSteps, featureCount, filters, kernelSize, stride, activationKind, name) : Creates a Conv1D block.
temporalConvStackBlock(timeSteps, featureCount, filters, kernelSize, layers, stride, activationKind, dropoutRate, name) : Creates a temporal convolution stack.
globalAvgPool1dBlock(timeSteps, featureCount, name) and globalMaxPool1dBlock(timeSteps, featureCount, name) : Create pooling blocks.
lstmBlock(timeSteps, featureCount, units, activationKind, name) and gruBlock(timeSteps, featureCount, units, activationKind, name) : Create recurrent blocks.
selfAttentionBlock(modelDim, causal, name) , multiHeadSelfAttentionBlock(modelDim, headCount, causal, name) , and crossAttentionBlock(queryRows, memoryRows, modelDim, headCount, name) : Create attention blocks.
transformerEncoderBlock(modelDim, causal, ffMultiplier, activationKind, name, dropoutRate, headCount) and transformerDecoderBlock(targetRows, memoryRows, modelDim, headCount, ffMultiplier, activationKind, dropoutRate, name) : Create Transformer blocks.
🔸 Prioritized Experience Replay
prioritizedReplayBuffer(featureCount, targetCount, maxRows, name) : Creates a replay buffer.
pushExperience(featureRowArray, targetRowArray, priority) : Adds or overwrites one replay row.
sampleBatch(batchSize, alpha, beta, seed) : Returns a deterministic weighted sample.
updatePriority(index, priority) : Updates a sampled row priority.
toBatch() : Returns all replay rows in chronological order.
ready(minRows) , size() , and clear() : Replay buffer utilities.
🔸 Feature Helpers
pushPositionalEncoding(position, dimensions, maxPeriod, featurePrefix) : Appends sinusoidal positional encoding values to a NeuraLib `FeatureBuilder`.
NeuraLib_Models is for Pine Script developers who want higher-level neural architecture blocks without leaving the NeuraLib runtime. It is built for compact research models inside TradingView's execution limits, not for oversized GPU-style networks.
All the diagrams in this publication are rendered natively on TradingView using Pine3D
----------------------------------------------------------------------------------------------------------------
This work is licensed under (CC BY-NC-SA 4.0) , meaning usage is free for non-commercial purposes given that Alien_Algorithms is credited in the description for the underlying software. For commercial use licensing, contact Alien_Algorithms
Pine Script® library
NeuraLib: A Native AI and Deep Learning Runtime
NeuraLib is a tensor-based, auto-differentiating Machine Learning runtime built natively for Pine Script™.
It brings real Deep Learning mechanisms that power modern Artificial Intelligence systems into TradingView. Instead of relying on fixed formulas, static regressions, or rigid structures, NeuraLib gives Pine developers a different tool: a compact neural runtime that can learn from the features you feed it, using the architecture you define.
This means users are no longer limited to classical methods like Linear Regression, Logistic Regression, KNN, Naive Bayes, Kalman Filters, or Markov Chains. One can build adaptive architectures perfectly suited for custom indicators, strategies, regime detection, directional prediction, price transforms, and AI-assisted signal generation.
Using NeuraLib, one can build a model, collect market data, normalize it, run predictions, train through backpropagation, track validation behavior, and update weights directly inside TradingView.
Furthermore, it is not necessary to directly display trained variables. The process can be a part of a larger script functionality, where AI-powered decision making changes how an indicator behaves.
The goal is to make real neural network workflows usable in Pine Script without hiding the important controls, being scalable with evolving market dynamics, and abstracting away the complexity that comes with such software. The provided API is highly modular and intuitive, using chained object-oriented programming for easy readability and use. The backend is engineered with fault-tolerance in mind, providing users with sanity checks and preventing common pitfalls by default.
Think of NeuraLib as a comprehensive machine learning ecosystem, containing:
A Model Builder : Define neural networks with readable chained calls like `.input()`, `.dense()`, and `.dropout()`.
An In-Pine Training Engine : Models calculate losses, backpropagate gradients, update weights, and produce predictions directly on chart data.
Automated Data Pipelines : Built-in datasets handle feature collection, robust scaling (Z-Score, Min-Max), validation holdout splits, and time-series rolling windows.
Finance-Native Loss Functions : Beyond standard error metrics, the engine includes Directional, Quantile, Multi-Horizon Weighted, and Sharpe-style losses tailored for trading.
Practical Training Controls : Layer Normalization, AdamW weight decay, gradient clipping, gradient accumulation, and early stopping are built in to prevent overfitting.
Advanced Optimizers : Train networks using RMSProp, Adam, or AdamW, paired with learning rate schedules like Warmup Cosine and Step Decay.
For newer users, this means you can start with a simple dense model. For advanced users, the same runtime exposes graph operations, custom blocks, tensors, matrix operations, optimizers, schedules, losses, and extension hooks.
In plain terms, a model receives a row of numbers called features, compares its output against a target, measures the error with a loss function, and then adjusts its internal weights to reduce that error next time.
----------------------------------------------------------------------------------------------------------------
🔷 WHAT MAKES IT DIFFERENT
🔸 Parity-tested neural math
NeuraLib’s core operations have been tested against established Machine Learning runtimes outside of TradingView (such as Keras / TensorFlow / PyTorch).
The goal was not to imitate the appearance of Machine Learning, but to reproduce the math that is proven to work. Standard forward passes, gradients, losses, and optimizer behavior were checked for 1:1 algorithmic parity, with negligible differences coming from normal floating-point behavior.
That means the matrix math, backpropagation, and gradient updates running on your chart follow the same underlying logic expected from professional Machine Learning environments.
🔸 Matrix-first computation
NeuraLib uses tensor and matrix abstractions as the foundation of the runtime. Under the hood, it supports the operations needed for neural computation, including matrix multiplication, broadcasting, activation functions, softmax, slicing, concatenation, reductions, normalization, attention scoring, convolution-style operations, and recurrent scan blocks.
🔸 Auto-differentiating graph engine
NeuraLib makes the computational graph a first-class object.
You can use high-level Sequential models, or build custom GraphBlocks from lower-level operations. Once a custom block is connected to a model, the same runtime handles the backward pass. That means your custom architecture can be trained with the same `.trainOnBatch()` workflow as standard layers.
----------------------------------------------------------------------------------------------------------------
🔷 CUSTOM GRAPHS
The Sequential API is the easiest way to start, but NeuraLib is not just a list of built-in layers.
You can create a `GraphBlock`, add operations, set an output node, and plug that block into a model. Once connected, the runtime handles the backward pass and parameter updates.
Useful graph operations include:
Matrix multiplication, transpose, add, subtract, multiply, divide, and scale.
Activation functions and softmax.
Layer Normalization and Dropout.
Causal masking, slicing, concatenation, row reduction, and column reduction.
Global average pooling and global max pooling for 1D sequences.
Attention score and attention apply operations.
Conv1D, LSTM scan, and GRU scan primitives.
This is the foundation that allows companion model libraries to add advanced AI and Machine Learning architectures without changing the main NeuraLib runtime.
----------------------------------------------------------------------------------------------------------------
🔷 BUILT-IN DATA GUARDRAILS
NeuraLib is not only a training mechanism. It also includes guardrails for cleaner research:
Invalid rows are rejected : Dataset rows must match the configured feature and target counts, and rows containing `na` values are not inserted.
Shape checks protect model calls : Forward, training, backward, and evaluation paths validate input and target shapes before running expensive graph code.
Train and validation splits are separated : `trainBatch()` and `validationBatch()` use holdout rows instead of blending all rows into one batch.
Scaler leakage is controlled : Validation batches are scaled from the training-side profile where the dataset split requires it, so validation normalization does not learn from the holdout slice.
Rolling windows respect time order : `RollingDataset` supports target offsets and wrapped ring buffers while preserving chronological reads.
These checks help reduce common data poisoning and data leakage mistakes: wrong row widths, missing values, validation contamination, target-offset leakage, and accidental overtraining across every historical bar.
----------------------------------------------------------------------------------------------------------------
🔷 A FIRST MODEL
The basic API is intentionally readable. This creates a small model with dropout, one hidden layer, Huber loss, AdamW optimization, and MAE tracking.
//@version=6
indicator("NeuraLib Basic Model", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
var nl.Sequential model = nl.sequential("basic_model")
var float modelOutput = na
if barstate.isfirst
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
.optimizer(nl.adamW(0.001))
.loss(nl.LossKind.huber)
.metric(nl.MetricKind.mae)
.withTrainingGate(true)
model := model
.input(array.from(4), "features")
.dropout(0.15)
.dense(8, nl.ActivationKind.relu, "hidden")
.dense(1, nl.ActivationKind.linear, "output")
.compile(cfg)
float rsiValue = ta.rsi(close, 14)
float emaValue = ta.ema(close, 21)
float atrValue = ta.atr(14)
float atrPct = close == 0.0 ? 0.0 : atrValue / close
float momentum = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
bool ready = not na(rsiValue) and not na(emaValue) and not na(atrPct) and not na(momentum)
if ready
float priceVsEma = emaValue == 0.0 ? 0.0 : close / emaValue - 1.0
nl.Tensor inputTensor = nl.vector(array.from(rsiValue, priceVsEma, atrPct, momentum), "features")
nl.Tensor outputTensor = model.predict(inputTensor)
modelOutput := outputTensor.get1d(0)
plot(modelOutput, "Untrained model output", color = color.aqua, linewidth = 2)
hline(0.0, "Zero", color = color.new(color.gray, 70))
The same model can then receive scaled batches from a dataset and train with `.trainOnBatch()`. The plot in this first example is the untrained forward output, included so the block can be pasted directly into an indicator.
----------------------------------------------------------------------------------------------------------------
🔷 A PRACTICAL DATA FLOW
Machine Learning models usually fail when the data pipeline is careless. Price, volume, volatility, and oscillators often live on very different scales. NeuraLib includes dataset and scaling helpers so the common workflow stays explicit:
Build a feature row.
Build a target row.
Push the row into a dataset.
Request a training batch.
Request a validation batch when needed.
Train, evaluate, predict, and inverse-scale targets when appropriate.
//@version=6
indicator("NeuraLib Return Validation Example", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
var nl.Sequential model = nl.sequential("returns_model")
var nl.WindowDataset dataset = nl.windowDataset(4, 1, 500, "returns_dataset")
var float predictedReturn = na
var float validationLossValue = na
var float trainingLossValue = na
if barstate.isfirst
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
.optimizer(nl.adamW(0.003))
.loss(nl.LossKind.huber)
.metric(nl.MetricKind.mae)
.trainEveryCall()
model := model
.input(array.from(4), "features")
.dense(8, nl.ActivationKind.relu, "hidden")
.dropout(0.10, "dropout")
.dense(1, nl.ActivationKind.linear, "next_return")
.compile(cfg)
dataset := dataset
.setInputScaler(nl.ScalerKind.zScore)
.setTargetScaler(nl.ScalerKind.zScore)
float rsiValue = ta.rsi(close, 14)
float emaValue = ta.ema(close, 21)
float atrValue = ta.atr(14)
float atrPct = close == 0.0 ? 0.0 : atrValue / close
float momentum = na(close[1]) or close[1] == 0.0 ? 0.0 : close / close[1] - 1.0
float realizedReturn = na(close[1]) ? na : nl.nextReturnValue(close[1], close)
bool rowReady = not na(rsiValue[1]) and not na(emaValue[1]) and not na(atrPct[1]) and not na(momentum[1]) and not na(close[1])
if rowReady
float prevEma = emaValue[1]
float priceVsEma = prevEma == 0.0 ? 0.0 : close[1] / prevEma - 1.0
array<float> features = array.from(
rsiValue[1],
priceVsEma,
atrPct[1],
momentum[1])
array<float> target = array.from(nl.nextReturnValue(close[1], close))
dataset := dataset.pushRow(features, target)
if dataset.ready(64)
nl.Batch train = dataset.trainBatch(16)
nl.Batch validation = dataset.validationBatch(16)
model := model.trainOnBatch(train.inputTensor, train.targetTensor)
trainingLossValue := model.trainStats.lastLoss
nl.LossResult validationLoss = model.evaluate(validation.inputTensor, validation.targetTensor)
validationLossValue := validationLoss.value
bool liveReady = not na(rsiValue) and not na(emaValue) and not na(atrPct) and not na(momentum)
if liveReady
float livePriceVsEma = emaValue == 0.0 ? 0.0 : close / emaValue - 1.0
array<float> liveFeatures = array.from(rsiValue, livePriceVsEma, atrPct, momentum)
nl.Tensor liveInput = nl.vector(liveFeatures, "live_features")
nl.Tensor scaledInput = dataset.scaleInput(liveInput)
nl.Tensor scaledPrediction = model.predict(scaledInput)
nl.Tensor rawPrediction = dataset.inverseScaleTarget(scaledPrediction)
predictedReturn := rawPrediction.get1d(0)
plot(realizedReturn, "Last realized return", color = color.gray)
plot(predictedReturn, "Predicted next return", color = color.aqua, linewidth = 2)
plot(validationLossValue, "Validation loss", color = color.orange)
plot(trainingLossValue, "Training loss", color = color.new(color.blue, 35))
hline(0.0, "Zero", color = color.new(color.gray, 70))
This example trains from completed historical pairs. The feature row comes from the previous bar, and the target is the return from that previous bar to the current bar. That keeps the example easy to inspect and avoids using future information in the feature row. When pasted into an indicator, it plots the last realized return, the model's predicted next return, training loss, and validation loss.
----------------------------------------------------------------------------------------------------------------
🔷 TWO PRACTICAL EXECUTION MODES
Deep Learning in Pine requires careful execution control. NeuraLib supports two main workflows.
🔸 1. Live-edge training
Use this when you want safer execution for larger models.
The dataset can collect rows across the chart, while the expensive training step only runs on the last confirmed historical bar. This helps avoid timeouts while still allowing the model to learn from recent prepared data.
cfg := cfg.withTrainingGate(true)
Use this for:
Larger models
More features
Rolling sequence inputs
Heavier architectures
Safer live-edge updates
🔸 2. Full-history training and inference
Use this when the model is intentionally small.
The model can train and infer across historical bars, which makes it possible to create lightweight adaptive indicators, such as an AI Moving Average that learns from recent local structure instead of using a fixed smoothing formula.
cfg := cfg.trainEveryCall()
Use this for:
Tiny dense models
Small batches
Fast adaptive filters
AI-assisted moving averages
Lightweight feature transforms
For full-history workflows, start small. A shallow model with 4 to 8 hidden units and a batch size of 8 or 16 is usually a better starting point than a deep architecture.
----------------------------------------------------------------------------------------------------------------
🔷 ADVANCED MODEL EXPANSION
NeuraLib is designed to act as the foundation for larger model libraries and community-built extensions.
To demonstrate this, NeuraLib Expansion: Advanced Model Layers is built entirely on top of the public NeuraLib API and launches alongside it on day one. The expansion library is published as NeuraLib_Models. It extends the runtime with higher-level builders for LSTMs, GRUs, temporal convolution stacks, residual dense blocks, dueling Q-heads for Reinforcement Learning, Transformer-style attention blocks, and Prioritized Experience Replay utilities.
The important part is architectural: advanced models plug into the same runtime. NeuraLib remains the foundation for tensors, graph execution, optimization, training, inference, datasets, and scaling. After importing `NeuraLib_Models`, its fluent methods become available on NeuraLib `Sequential` models, so the expansion alias does not need to be referenced directly in the layer chain.
//@version=6
indicator("NeuraLib Models Extension Demo", overlay = false, calc_bars_count = 600)
import Alien_Algorithms/NeuraLib/1 as nl
import Alien_Algorithms/NeuraLib_Models/1 as models
var nl.Sequential model = nl.sequential("advanced_demo")
if barstate.isfirst
    model := model
         .input(array.from(8), "sequence")
         .temporalConvStack(4, 2, 3, 2, 2, 1, nl.ActivationKind.relu, 0.0, "temporal")
         .globalAvgPool1d(2, 3, "pool")
         .duelingQHead(4, 2, nl.ActivationKind.relu, "q_head")
         .build(nl.rng(7))
----------------------------------------------------------------------------------------------------------------
🔷 FEATURE QUICK REFERENCE
Runtime : Matrix-first auto-differentiating neural graph runtime for Pine Script.
Model API : Chainable `Sequential` builder with `input`, `dense`, `dropout`, `layerNorm`, `activation`, `flatten`, `reshape`, and custom `block` support.
Training : Forward pass, loss calculation, backpropagation, gradient accumulation, optimizer steps, train stats, and history buffers.
Inference : `.predict()` for deterministic inference and `.predictMC()` for dropout-based uncertainty sampling.
Datasets : `WindowDataset` for flat rows and `RollingDataset` for time-series windows.
Scaling : None, Z-Score, Min-Max, Running Z-Score scalers, dataset input scaling, target scaling, and inverse target scaling.
Optimizers : SGD, Momentum, RMSProp, Adam, and AdamW.
Schedulers : Constant, Step Decay, Cosine Decay, and Warmup Cosine.
Activations : Linear, ReLU, Leaky ReLU, ELU, GELU Approx, Sigmoid, Tanh, Softplus, Swish, and Softmax.
Losses : MSE, MAE, Huber, LogCosh, Binary Cross Entropy, Binary Cross Entropy From Logits, Categorical Cross Entropy, Softmax Cross Entropy From Logits, Directional, Quantile, Multi-Horizon Weighted, and Sharpe.
Metrics : MAE, RMSE, Directional Accuracy, Binary Accuracy, Binary Accuracy From Logits, Categorical Accuracy, and Cosine Similarity.
Guardrails : Shape validation, invalid-row rejection, train/validation split helpers, leakage-aware scaler profiles, training gates, gradient clipping, and EarlyStopper.
Advanced expansion : Conv1D, temporal stacks, recurrent blocks, attention, Transformers, dueling Q-heads, positional encodings, and Prioritized Experience Replay.
----------------------------------------------------------------------------------------------------------------
🔷 IMPORTANT CONSIDERATIONS
Start small : Pine Script is not a GPU training environment. Compact models are the right starting point.
Control chart history : Use `calc_bars_count = 600` in `indicator()` when needed to balance available training history against model size and execution time.
Use the training gate : For heavier models, use `.withTrainingGate(true)` so backpropagation runs only at the confirmed historical edge.
Scale your inputs : Raw market features often differ by orders of magnitude. Use dataset scalers unless you have a deliberate reason not to.
Validate separately : Use `trainBatch()` and `validationBatch()` to monitor generalization instead of only watching training loss.
Avoid lookahead : Build feature rows only from information available at the time of the row. Use completed target rows for training.
Treat outputs as research signals : NeuraLib provides model mechanics. Strategy design, risk management, and market assumptions remain the user's responsibility.
----------------------------------------------------------------------------------------------------------------
🔷 API REFERENCE
🔸 Model Setup
sequential(name) : Creates an empty `Sequential` model.
compileConfig() : Creates a model configuration object.
build(rng) : Builds model parameters with a deterministic random stream.
compile(config) : Builds the model when needed and applies the training configuration.
rng(seed, streamId) : Creates a deterministic random stream.
🔸 Sequential Methods
input(dimsArray, name) : Defines the input shape.
dense(units, activation, name) : Adds a fully connected layer.
qHead(actionCount, activation, name) : Adds a Q-value output head.
activation(activationKind, alpha, name) : Adds an activation block.
dropout(rate, name) : Adds dropout regularization.
layerNorm(name) : Adds layer normalization.
flatten(name) and reshape(outputDimsArray, name) : Adjust model shape metadata.
block(graphBlock) : Adds a custom `GraphBlock`.
trainOnBatch(inputTensor, targetTensor) : Runs training when the active gate allows it.
backward(targetTensor) : Accumulates gradients from the last forward pass without stepping.
step() : Applies the optimizer step to accumulated gradients.
predict(inputTensor) : Runs inference.
predictMC(inputTensor, samples) : Runs dropout-enabled Monte Carlo prediction and returns mean and variance.
evaluate(inputTensor, targetTensor) : Calculates loss without updating weights.
fitDataset(dataset) and fitRollingDataset(dataset, targetOffset) : Train through dataset adapters.
getWeightsArray() and setWeightsArray(weightsArray) : Export and import flat model weights.
softUpdateFrom(sourceModel, tau) : Soft-update parameters from another model.
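For example, `backward()` and `step()` separate gradient accumulation from the optimizer update. A hedged sketch, assuming a compiled `model`, a prepared `dataset`, and that both methods return the model for reassignment:
// Illustrative only: accumulate gradients from one batch by hand, then step once.
nl.Batch batch = dataset.trainBatch(16)
model.predict(batch.inputTensor)             // forward pass
model := model.backward(batch.targetTensor)  // accumulate gradients without stepping
model := model.step()                        // apply the optimizer to the accumulated gradients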
🔸 CompileConfig Methods
optimizer(optimizerState) : Sets the optimizer.
schedule(scheduleState) : Sets the learning-rate schedule.
loss(lossKind) : Sets the training loss.
reduction(reductionKind) : Sets loss reduction behavior.
metric(metricKind) : Adds a metric.
batchSize(size), epochsPerBar(count), evalStride(stride), and historyLength(length) : Store batch and cadence preferences, and set the metric history length.
clipNorm(value) and clipValue(value) : Apply gradient clipping.
gradAccumSteps(steps) : Accumulates gradients before stepping.
withTrainingGate(enabled) : Restricts training to the last confirmed historical bar when enabled.
trainEveryCall() : Allows training whenever `.trainOnBatch()` is called.
presetPriceRegression(), presetReturnRegression(), presetBinaryDirection(), presetBinaryDirectionLogits(), presetQValues(), and presetSharpe() : Apply common loss and metric presets.
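A brief configuration sketch combining a preset with the cadence and clipping helpers above; whether `presetReturnRegression()` overrides an earlier `loss()` call is an assumption here:
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
     .optimizer(nl.adamW(0.003))
     .presetReturnRegression()  // assumed to bundle a regression loss with a matching metric
     .batchSize(16)
     .clipNorm(1.0)
     .withTrainingGate(true)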
🔸 Datasets
windowDataset(featureCount, targetCount, maxRows, name) : Stores flat feature and target rows.
rollingDataset(timeSteps, featureCount, targetCount, maxRows, name) : Stores time-series windows.
pushRow(featureArray, targetArray) : Adds one validated row.
pushBuilderRow(featureBuilder, targetArray) : Adds a row from a `FeatureBuilder`.
pushNextReturnRow(featureBuilder, currentValue, futureValue) : Adds a next-return target.
pushNextDirectionRow(featureBuilder, currentValue, futureValue, threshold, zeroOne) : Adds a direction target.
ready(minRows or minWindows, targetOffset) and size() : Check dataset readiness.
lastBatch(batchSize) : Returns the most recent scaled rows from a `WindowDataset`.
toBatch() : Returns all rows from a `WindowDataset`.
unrollBatch(targetOffset) : Returns all rolling windows from a `RollingDataset`.
trainBatch(validationRows or validationWindows, targetOffset) : Returns the training side of the split.
validationBatch(validationRows or validationWindows, targetOffset) : Returns the validation side of the split.
setInputScaler(kind), setTargetScaler(kind), scaleInput(tensor), scaleTarget(tensor), and inverseScaleTarget(tensor) : Configure and apply scaling.
clear() : Clears stored rows.
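A sketch of the leakage-aware row helpers, assuming the signatures above; features are taken from the previous bar so the next-return target never sees its own inputs:
var nl.WindowDataset rows = nl.windowDataset(2, 1, 500, "demo_rows")
nl.FeatureBuilder fb = nl.featureBuilder("row")
fb.push(ta.rsi(close, 14)[1], "rsi")          // previous-bar feature
fb.push(ta.atr(14)[1] / close[1], "atr_pct")  // previous-bar feature
// The target becomes the return from close[1] to close, matching the feature bar.
rows := rows.pushNextReturnRow(fb, close[1], close)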
🔸 Tensor, Matrix, and Feature Helpers
scalar(value), vector(valuesArray), matrix2d(rows, cols, fillValue), zeros(shape), ones(shape), and full(shape, fillValue) : Create tensors.
shapeFromDims(dimsArray) : Creates a shape.
matrixTensor(tensor), matrixTensor2d(rows, cols, fillValue), and matrixTensorFromMatrix(sourceMatrix) : Create matrix tensors.
reshape(dimsArray), flatten(), row(rowIndex), get1d(index), sum(), mean(), variance(), normL2(), argmax(), and dot(other) : Tensor methods.
matmul(), transpose(), add(), subtract(), multiply(), divide(), scale(), activate(), softmax(), sliceRows(), sliceCols(), concatRows(), concatCols(), globalAvgPool1d(), and globalMaxPool1d() : MatrixTensor methods.
featureBuilder(name), push(value, featureName), addFeature(value, featureName), toTensor(tensorName), toArray(), size(), and clear() : Feature row helpers.
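For instance, the tensor helpers can be exercised directly; a tiny sketch, assuming `vector()` accepts a plain float array as listed above:
nl.Tensor v = nl.vector(array.from(1.0, 2.0, 3.0))
plot(v.mean(), "Mean")       // 2.0
plot(v.normL2(), "L2 norm")  // sqrt(1 + 4 + 9) ≈ 3.742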
🔸 Scalers, Optimizers, and Schedules
zScoreScaler(), minMaxScaler(), runningZScoreScaler(), and noneScaler() : Standalone scaler states.
fit(tensor), partialFit(tensor), transform(tensor), and inverseTransform(tensor) : Scaler methods.
sgd(learningRate), momentum(learningRate, momentum), rmsprop(learningRate, rho, epsilon), adam(learningRate, beta1, beta2, epsilon), and adamW(learningRate, beta1, beta2, epsilon, weightDecay) : Optimizers.
constantSchedule(learningRate), stepDecay(baseLearningRate, decaySteps, gamma), cosineDecay(baseLearningRate, minLearningRate, decaySteps), and warmupCosine(baseLearningRate, minLearningRate, warmupSteps, decaySteps) : Schedules.
currentRate(stepCount) : Reads a schedule's learning rate at a step.
paramBank(), append(), zeroGrad(), globalGradNorm(), step(optimizerState), and softUpdateFrom(sourceBank, tau) : Low-level parameter bank utilities.
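A short sketch pairing an optimizer with a warmup-cosine schedule through the CompileConfig methods above; the argument order follows the signatures listed here:
nl.CompileConfig cfg = nl.compileConfig()
cfg := cfg
     .optimizer(nl.adam(0.01))
     .schedule(nl.warmupCosine(0.01, 0.0005, 100, 2000))  // base LR, min LR, warmup steps, decay steps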
🔸 Losses and Metrics
mse(), mae(), huber(), logCosh(), binaryCrossEntropy(), binaryCrossEntropyFromLogits(), categoricalCrossEntropy(), softmaxCrossEntropyFromLogits(), directionalLoss(), quantileLoss(), multiHorizonWeighted(), and sharpeLoss() : Direct loss helpers.
metricValue(metricKind, predictionTensor, targetTensor) : Direct metric helper.
earlyStopper(patience, minDelta), update(validationLoss), and reset() : Validation stopping helper.
nextReturnValue(currentValue, futureValue) and nextDirectionValue(currentValue, futureValue, threshold, zeroOne) : Common target helpers.
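A hedged sketch of validation-driven stopping, reusing the `dataset` and `model` from the worked example above; the `nl.EarlyStopper` type name and the boolean return of `update()` are assumptions:
var nl.EarlyStopper stopper = nl.earlyStopper(10, 0.0001)
var bool stopTraining = false
if dataset.ready(64) and not stopTraining
    nl.Batch validation = dataset.validationBatch(16)
    nl.LossResult vl = model.evaluate(validation.inputTensor, validation.targetTensor)
    stopTraining := stopper.update(vl.value)  // assumed to return true once patience is exhausted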
🔸 GraphBlock Operations
graphBlock(name) : Creates a custom trainable graph block.
input(), param(), constScalar(), constMatrix(), and output() : Define graph inputs, parameters, constants, and output metadata.
matmul(), add(), subtract(), multiply(), divide(), scale(), activate(), softmax(), transpose(), layerNorm(), and dropout() : NeuraLib graph math.
causalMask(), sliceRows(), concatRows(), sliceCols(), concatCols(), reduceRows(), and reduceCols() : Structural graph operations.
globalAvgPool1d(), globalMaxPool1d(), attentionScore(), attentionApply(), conv1d(), scanLstm(), and scanGru() : Sequence and architecture primitives.
🔸 NeuraLib_Models API
prioritizedReplayBuffer(featureCount, targetCount, maxRows, name) : Creates a replay buffer.
pushExperience(featureRowArray, targetRowArray, priority), sampleBatch(batchSize, alpha, beta, seed), updatePriority(index, priority), toBatch(), ready(minRows), size(), and clear() : Prioritized Experience Replay helpers.
pushPositionalEncoding(position, dimensions, maxPeriod, featurePrefix) : Adds positional encoding values to a `FeatureBuilder`.
residualDense(), duelingQHead(), conv1d(), temporalConvStack(), globalAvgPool1d(), globalMaxPool1d(), lstm(), gru(), selfAttention(), multiHeadSelfAttention(), crossAttention(), transformerEncoder(), transformerEncoderStack(), and transformerDecoder() : NeuraLib_Models `Sequential` methods.
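A sketch of the replay workflow, assuming the signatures above; the `models.PrioritizedReplayBuffer` type name and the `nl.Batch` return type of `sampleBatch()` are assumptions:
import Alien_Algorithms/NeuraLib_Models/1 as models
var models.PrioritizedReplayBuffer replay = models.prioritizedReplayBuffer(4, 1, 1000, "replay")
replay.pushExperience(array.from(0.1, -0.2, 0.3, 0.05), array.from(1.0), 1.0)
if replay.ready(64)
    nl.Batch batch = replay.sampleBatch(16, 0.6, 0.4, 42)  // batchSize, alpha, beta, seed
    model := model.trainOnBatch(batch.inputTensor, batch.targetTensor)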
NeuraLib is for Pine Script developers who want to move beyond fixed formulas and experiment with real neural network workflows directly inside TradingView. It is a research framework, not a guarantee of market performance. Use validation, avoid lookahead, control risk, and keep models small enough for Pine's execution limits.
All the diagrams in this publication are rendered natively on TradingView using Pine3D
----------------------------------------------------------------------------------------------------------------
This work is licensed under CC BY-NC-SA 4.0, meaning usage is free for non-commercial purposes, provided that Alien_Algorithms is credited in the description for the underlying software. For commercial licensing, contact Alien_Algorithms.
Pine Script® library
Price Relative Position
To check the current price position relative to moving averages, 52wk high/low and intraday high/low.
Pine Script® indicator
Indices Delta Dashboard
The indicator calculates "Delta" as the difference between buying and selling volume (approximated by comparing the Close to the Open). It provides two key metrics for each ticker in a clean table format:
Bar Delta: The net buy/sell volume for the most recent completed bar.
Session Delta: The cumulative net buy/sell volume since the start of the current trading day.
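The approximation itself is a one-line heuristic; a minimal sketch with illustrative names (the published script adds the multi-ticker table on top):
//@version=6
indicator("Delta sketch")
// Count the full bar volume as buying when close > open, selling when close < open.
float barDelta = close > open ? volume : close < open ? -volume : 0.0
var float sessionDelta = 0.0
sessionDelta := session.isfirstbar ? barDelta : sessionDelta + barDelta  // resets each trading day
plot(sessionDelta, "Session Delta", color = sessionDelta >= 0 ? color.green : color.red)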
Key Features:
Multi-Ticker Support: Monitors NQ (Nasdaq 100), ES (S&P 500), YM (Dow Jones), and RTY (Russell 2000) simultaneously.
Session Tracking: Automatically resets the "Session Delta" at the start of each new trading day.
Customization: You can adjust the tickers, table position, and colors in the script settings.
Visual Clarity: Bullish values are highlighted in green, and bearish values in red, for quick interpretation.
What has changed:
Generic Symbols: I've changed the default symbols to NQ1!, ES1!, YM1!, and RTY1!. TradingView will now automatically look for the primary continuous futures contract available for these tickers.
Error Handling: I added the ignore_invalid_symbol = true parameter to the data requests. This prevents the script from stopping if one of the symbols isn't found.
Manual Selection: If a symbol still shows as "Invalid" in the table, you can now easily change it:
Open the indicator Settings (gear icon).
Click on the Ticker input field.
Use the search box to find and select the exact version of the symbol you use (e.g., if you use Micro Futures, search for MNQ1!).
What's New:
Text Size Control: I added a "Text Size" input in the settings (Style group). You can now choose between Tiny, Small, Normal, and Large to adjust the overall size of the table.
Extended Positions: The "Table Position" setting now explicitly includes all four corners: Top Right, Top Left, Bottom Right, and Bottom Left.
Cleaner Ticker Display: I implemented a cleaning function that:
Removes the exchange prefix (e.g., CME:NQ1! becomes NQ1!).
Specifically removes keywords like MINI or MICRO from the displayed text to keep the table compact.
The table will now look much cleaner and can be positioned anywhere on your chart to avoid overlapping with price action or other indicators.
Key Additions:
Table Timeframe: You can now set a specific timeframe for the table (e.g., 5m) independently of your chart timeframe (e.g., 1m). The "Bar Delta" will reflect the volume of the most recent 5-minute bar, and the "Session Delta" will accumulate those 5-minute bars.
Session Type (Globex):
Daily (All): Accumulates delta for the entire trading day (resetting at the daily open).
Globex (18:00-09:30 EST): Accumulates delta specifically during the overnight session from 6:00 PM to 9:30 AM EST.
The script uses the America/New_York timezone so it handles Daylight Saving Time changes automatically (see the sketch after this list).
Dynamic Headers: The table header now displays the current calculation timeframe in parentheses (e.g., "Ticker (5)") so you always know what data you're looking at.
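A sketch of the session test this likely relies on, using Pine's built-in time() overload with a session string and timezone; the actual script may structure it differently:
// True during the overnight Globex window; the named timezone keeps it DST-safe.
bool inGlobex = not na(time(timeframe.period, "1800-0930", "America/New_York"))
bgcolor(inGlobex ? color.new(color.blue, 90) : na)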
How to use:
Open the indicator Settings.
Go to the Timeframe & Session group.
Adjust the Table Timeframe to your preference.
Switch Session Type to "Globex" to see only the overnight delta.
CREATED BY luxAlgo Quant
Pine Script® indicator
ZCT USD Volume
Volume indicator showing volume in USD.
Unlike other volume indicators, this one displays volume in USD so you can scan for the dollar amount you need, whether that is a minimum volume threshold, fast spikes, or any other volume pattern you're looking for.
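Conceptually this is a single expression; a minimal sketch (the published script may use a different price basis or add coloring):
//@version=6
indicator("USD volume sketch")
plot(volume * close, "Volume (USD)", style = plot.style_columns)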
Pine Script® indicator
XAUUSD PDH/PDL EMA Trend Scalper Strategy
XAUUSD PDH/PDL Trend Scalper is a rule-based breakout strategy designed for XAUUSD.
The strategy uses the previous trading session’s high and low to create Fibonacci-based breakout levels for the next session.
Core logic:
- Buy entry: Previous Day High / Fib 1.0 breakout
- Buy TP: Fib 1.1
- Buy SL: Fib 0.9
- Sell entry: Previous Day Low / Fib 0.0 breakout
- Sell TP: Fib -0.1
- Sell SL: Fib 0.1
- Fixed Risk/Reward: 1:1
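A sketch of the level construction, assuming the previous day's high and low anchor the 1.0 and 0.0 Fibonacci levels (names illustrative):
// PDH = Fib 1.0, PDL = Fib 0.0; targets and stops sit 0.1 ranges beyond the breakout level.
float pdh = request.security(syminfo.tickerid, "D", high[1], lookahead = barmerge.lookahead_on)
float pdl = request.security(syminfo.tickerid, "D", low[1], lookahead = barmerge.lookahead_on)
float fibRange = pdh - pdl
float buyTP  = pdh + 0.1 * fibRange  // Fib 1.1
float buySL  = pdh - 0.1 * fibRange  // Fib 0.9
float sellTP = pdl - 0.1 * fibRange  // Fib -0.1
float sellSL = pdl + 0.1 * fibRange  // Fib 0.1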
Trend score filter:
- Long trades require a 3/3 EMA score
- Short trades require at least 2/3 EMA score
- EMA score is based on:
- D1 20 EMA
- H1 100 EMA
- M15 50 EMA
- Score day is calculated from 01:00 to 01:00 Turkey time.
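A sketch of the 3-point score, assuming one point per timeframe when price holds above the stated EMA; the scoring direction for shorts is an assumption:
float emaD1  = request.security(syminfo.tickerid, "D",  ta.ema(close, 20))
float emaH1  = request.security(syminfo.tickerid, "60", ta.ema(close, 100))
float emaM15 = request.security(syminfo.tickerid, "15", ta.ema(close, 50))
int emaScore = (close > emaD1 ? 1 : 0) + (close > emaH1 ? 1 : 0) + (close > emaM15 ? 1 : 0)
bool longOk  = emaScore == 3  // longs require 3/3
bool shortOk = emaScore <= 1  // shorts require at least 2/3 bearish alignment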
Risk and session filters:
- 1% risk per trade based on a 100K account size
- No trading between 23:30 and 02:00 Turkey time
- First 60 minutes after session open are skipped
- No trade if stop distance is below $3
- Open positions are closed at 23:30 Turkey time
- Only one completed trade per day
Visual features:
- Previous day Fibonacci breakout levels
- Entry, TP and SL levels
- EMA score table
- No Trade background zone
- Stop distance labels
Backtest summary:
- Total trades: 284
- Win rate: 62.32%
- Net profit: +$70,977
- Profit factor: 1.67
- Max drawdown: around -$8,400
This script is for educational and backtesting purposes only. It is not financial advice. Always forward test before using any strategy in live markets.
Pine Script® strategy
Silver Bullet Zone - Tijuana (7AM-9AM)
The indicator visually highlights the 7:00 AM to 9:00 AM (Tijuana time) time slot, equivalent to 10:00 AM–12:00 PM ET, directly on the TradingView chart.
* Semi-transparent yellow background during those two hours.
* Vertical dashed lines marking the exact start and end of the session.
* Text labels at the beginning of each zone for easy identification.
* Fully customizable via inputs: change colors, start times, and end times.
In practice, it provides a visual alert for when you are within the NY ICT Silver Bullet window for the US30.
Pine Script® indicator
Multi-TF Profiles & VWAPs [edge]
A multi-pattern intraday execution indicator built for precision trading around high-probability zones.
This tool combines higher timeframe context with 2-minute execution logic to identify breakout, continuation, reversal, mean reversion, and failed breakout setups—all structured through a clean ABC grading system.
Instead of relying on a single trigger like engulfing candles, the indicator evaluates a stack of confirmations, including:
liquidity sweeps
reclaim and rejection behavior
acceptance above/below value
displacement (CSD)
imbalance (IFVG)
structural context relative to key zones
The result is a structured framework that distinguishes between:
early setup formation (awareness)
confirmed execution signals (precision)
Each signal is graded based on:
location quality
confirmation strength
structural alignment
risk/reward clarity
Designed for intraday traders, the core execution logic is optimized on the 2-minute timeframe, while still respecting higher timeframe context.
This indicator does not aim to predict the market.
It aims to filter noise and highlight asymmetric opportunities.
Pine Script® indicator
RV 52-Week High (Weekly)
RV 52-Week High – Enjoy spotting every new weekly 52-week high with clear parrot-green arrows.
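On a weekly chart the detection reduces to a rolling-maximum check; a minimal sketch:
//@version=6
indicator("52-week high sketch", overlay = true)
// On the weekly timeframe, 52 bars cover roughly one year; flag bars printing a new 52-week high.
bool newHigh = high >= ta.highest(high, 52)
plotshape(newHigh, "New 52wk High", style = shape.arrowup, color = color.lime, location = location.belowbar)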