AI Predictive Flow (Zeiierman)

█ Overview
AI Predictive Flow (Zeiierman) is a pattern-based oscillator that estimates future price direction by comparing the current market state to similar historical conditions.
Instead of relying on traditional indicators like momentum or moving averages alone, the script builds a multi-feature representation of price behavior and uses a k-Nearest Neighbors (kNN) model to identify past patterns that closely resemble the present.
From those matches, it derives an expected forward return, which is then transformed into a smooth oscillator and a predicted trend regime.
The result is a forward-looking signal that reflects a data-driven expectation based on similar past patterns, not just current price movement.
█ How It Works
⚪ Feature Extraction (Market State Model)
The script converts price into a compact feature set that describes the current market state.
It uses four core features:
Short-term return
Momentum
RSI bias
EMA spread
These are created inside the feature function:
// NOTE: the [ ] history offsets were stripped when this text was published;
// they are restored here, with momLn (the momentum lookback) assumed.
feat(shift, mode) =>
    c  = close[shift]
    c1 = close[shift + 1]
    cm = close[shift + momLn]
    ef = ta.ema(close, fLen)[shift]
    es = ta.ema(close, sLen)[shift]
    r  = ta.rsi(close, rsiLn)[shift]
    float v = 0.0
    if mode == 1
        v := c1 != 0 ? math.log(c / c1) : 0.0
    else if mode == 2
        v := cm != 0 ? (c - cm) / cm : 0.0
    else if mode == 3
        v := (r - 50.0) / 50.0
    else
        v := c != 0 ? (ef - es) / c : 0.0
    v
Each feature captures a different dimension of price behavior:
return measures immediate movement
momentum measures directional displacement
RSI bias measures internal pressure
EMA spread measures trend structure
These values are then stacked across multiple bars to form the pattern used for comparison.
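As a rough illustration of these four features outside Pine, here is a minimal Python sketch. The parameter names (mom, f_len, s_len, rsi_len) and the simplified RSI are assumptions for readability, not the script's actual inputs:

```python
import math

def ema(xs, n):
    # standard EMA over the whole series, seeded with the first value
    a = 2.0 / (n + 1)
    e = xs[0]
    for x in xs[1:]:
        e = a * x + (1 - a) * e
    return e

def rsi(xs, n):
    # simplified RSI over the last n price changes (simple averages,
    # not Wilder smoothing) -- enough to illustrate the feature
    gains  = [max(xs[k] - xs[k - 1], 0.0) for k in range(len(xs) - n, len(xs))]
    losses = [max(xs[k - 1] - xs[k], 0.0) for k in range(len(xs) - n, len(xs))]
    ag, al = sum(gains) / n, sum(losses) / n
    return 100.0 if al == 0 else 100.0 - 100.0 / (1.0 + ag / al)

def features(close, mom=10, f_len=12, s_len=26, rsi_len=14):
    # the four features for the most recent bar
    c, c1, cm = close[-1], close[-2], close[-1 - mom]
    return [
        math.log(c / c1),                             # short-term return
        (c - cm) / cm,                                # momentum
        (rsi(close, rsi_len) - 50.0) / 50.0,          # RSI bias
        (ema(close, f_len) - ema(close, s_len)) / c,  # EMA spread
    ]
```

On a steadily rising series, all four features come out positive, which is the behavior the pattern matcher relies on.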
⚪ Pattern Memory (Historical Pattern Library)
The script stores rolling sequences of each feature into separate matrices so the current market state can be compared against past states.
That process is built here:
pushFeat(mat, mode) =>
    vals = array.new<float>(tot, 0.0)
    for i = 0 to tot - 1
        array.set(vals, tot - 1 - i, feat(i, mode))
    cur = array.slice(vals, tot - len, tot)
    old = array.slice(vals, 0, len)
    matrix<float> out = matrix.new<float>(1, len, 0.0)
    for i = 0 to len - 1
        matrix.set(out, 0, i, array.get(cur, i))
    hist = array.new<float>(len, 0.0)
    for i = 0 to len - 1
        array.set(hist, i, array.get(old, i))
    if mat.rows() >= mem
        mat.remove_row(0)
    mat.add_row(mat.rows(), hist)
    out
This creates:
a current feature row
a rolling history of prior feature patterns
So rather than comparing single-bar values, the model compares multi-bar pattern structure.
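The slicing logic can be sketched in Python as follows. The names (push_pattern, pat_len, mem_size) are illustrative, and vals is assumed to hold exactly 2*pat_len values, oldest first:

```python
from collections import deque

def push_pattern(memory, vals, pat_len, mem_size):
    # vals: one feature's values over the last 2*pat_len bars, oldest first.
    # The newest pat_len values become the "current" pattern; the older
    # pat_len values are archived as one historical pattern, with the
    # memory capped at mem_size rows (oldest row dropped first).
    cur  = vals[-pat_len:]
    hist = vals[:pat_len]
    memory.append(hist)
    if len(memory) > mem_size:
        memory.popleft()
    return cur
```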
⚪ Pattern Matching Engine (kNN Distance Model)
Once the current feature pattern is built, it is compared to all stored historical patterns.
Distance is measured feature-by-feature across the full pattern length:
getDist(matrix<float> a1, matrix<float> a2, matrix<float> a3, matrix<float> a4, matrix<float> b1, matrix<float> b2, matrix<float> b3, matrix<float> b4) =>
    out = array.new<float>(b1.rows(), 0.0)
    for i = 0 to b1.rows() - 1
        s = 0.0
        d1 = a1.diff(b1.submatrix(i, i + 1)).row(0)
        d2 = a2.diff(b2.submatrix(i, i + 1)).row(0)
        d3 = a3.diff(b3.submatrix(i, i + 1)).row(0)
        d4 = a4.diff(b4.submatrix(i, i + 1)).row(0)
        for j = 0 to len - 1
            s += math.pow(d1.get(j), 2) * 0.25 +
                 math.pow(d2.get(j), 2) * 0.25 +
                 math.pow(d3.get(j), 2) * 0.25 +
                 math.pow(d4.get(j), 2) * 0.25
        out.set(i, math.sqrt(s))
    out
This produces a similarity score for every stored pattern. A smaller distance means the past setup looked more like the present one.
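The same equal-weighted distance can be sketched in Python. Each memory entry holds four patterns, mirroring the four feature matrices, and the 0.25 weights match the script:

```python
import math

def distances(cur, memory, weights=(0.25, 0.25, 0.25, 0.25)):
    # cur: the four current patterns (one per feature), each of equal length.
    # memory: stored entries, each likewise four patterns of that length.
    # Returns one distance per stored entry: a Euclidean norm over every
    # (feature, position) squared difference, each feature weighted equally.
    out = []
    for entry in memory:
        s = 0.0
        for f in range(4):
            for a, b in zip(cur[f], entry[f]):
                s += weights[f] * (a - b) ** 2
        out.append(math.sqrt(s))
    return out
```

An identical stored pattern scores 0, and larger differences in any feature push the score up.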
⚪ Prediction Model (kNN Forward Expectation)
After the distances are ranked, the script selects the nearest neighbors and averages their future outcomes.
The kNN model is implemented here:
knn(dist, n) =>
    ix = dist.sort_indices()
    useN = math.min(n, ix.size())
    sumD = 0.0
    avg = 0.0
    for i = 0 to useN - 1
        sumD += dist.get(ix.get(i))
    if useN > 0
        for i = 0 to useN - 1
            d = dist.get(ix.get(i))
            w = useN > 1 ? (sumD != 0 ? (1 - d / sumD) : 1.0) : 1.0
            avg += Y.get(ix.get(i)) * w
    avg
The forward return used for comparison is defined here:
y := math.log(base) - math.log(base[horizon])   // [horizon] offset restored; it was stripped in the published text (name assumed)
This represents the forward return following each historical pattern. The result is a weighted expectation of future movement, not just a reading of current trend.
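The neighbor-weighting step translates to Python roughly as follows; knn_predict and its argument names are illustrative:

```python
def knn_predict(dist, y, k):
    # pick the k smallest distances, then average the matching forward
    # returns with the script's weighting w = 1 - d / sum(d), so the
    # closest neighbors contribute the most
    ix = sorted(range(len(dist)), key=lambda i: dist[i])[:k]
    sum_d = sum(dist[i] for i in ix)
    avg = 0.0
    for i in ix:
        w = 1.0 if len(ix) <= 1 or sum_d == 0 else 1.0 - dist[i] / sum_d
        avg += y[i] * w
    return avg
```

Note that a very distant third pattern has no effect when k = 2; only the selected neighbors vote.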
⚪ Predictive Oscillator
The raw kNN prediction is smoothed and transformed into the main oscillator and signal line.
pred_ = ta.ema(pred, smth)
if not na(pred)
    predSm := smth > 1 ? pred_ : pred
osc  = ta.ema(predSm, oscLn)
sig  = ta.ema(osc, sigLn)
hist = osc - sig
This creates:
Oscillator = smoothed expected return
Signal line = secondary smoothing for crossover confirmation
Histogram = distance between oscillator and signal
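A Python sketch of this EMA chain (ema_series and the default lengths are assumptions, not the script's inputs):

```python
def ema_series(xs, n):
    # running EMA over a series, seeded with the first value
    a = 2.0 / (n + 1)
    out = [xs[0]]
    for x in xs[1:]:
        out.append(a * x + (1 - a) * out[-1])
    return out

def oscillator(pred, smth=3, osc_len=5, sig_len=9):
    # smooth the raw prediction, then derive signal line and histogram
    pred_sm = ema_series(pred, smth) if smth > 1 else pred
    osc  = ema_series(pred_sm, osc_len)
    sig  = ema_series(osc, sig_len)
    hist = [o - s for o, s in zip(osc, sig)]
    return osc, sig, hist
```

On a steadily rising prediction, the signal lags the oscillator, so the histogram stays positive.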
⚪ Predicted Trend Regime
Beyond the oscillator, the script also builds a broader trend regime using the predicted price path.
First, the raw prediction is converted into a projected price line:
predLine := base + base * (math.exp(pred) - 1)
Then a regime band is created using ATR:
hiRef = predLine + bandM * atr
loRef = predLine - bandM * atr
if ta.highest(hiRef, regLn) == hiRef
    trendUp := true
if ta.lowest(loRef, regLn) == loRef
    trendUp := false
This background state represents:
bullish predicted regime when the projected path is pressing into new highs
bearish predicted regime when the projected path is pressing into new lows
So the background is not showing the raw price trend. It is showing the model’s predicted regime bias.
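The regime update can be sketched in Python like this; the list-based inputs stand in for Pine's rolling series:

```python
def regime(pred_line, atr, band_m, reg_len, prev_up):
    # pred_line, atr: recent values, newest last. The regime flips
    # bullish when the upper band prints a reg_len-bar high, bearish
    # when the lower band prints a reg_len-bar low; otherwise it
    # keeps its previous state.
    hi = [p + band_m * a for p, a in zip(pred_line, atr)]
    lo = [p - band_m * a for p, a in zip(pred_line, atr)]
    up = prev_up
    if hi[-1] >= max(hi[-reg_len:]):
        up = True
    if lo[-1] <= min(lo[-reg_len:]):
        up = False
    return up
```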
█ How to Use
⚪ Read the Oscillator
Above 0 → bullish expectation
Below 0 → bearish expectation
Near 0 → neutral/low conviction
Far from 0 → strong directional push
Use crossovers for entry timing:
Bullish crossover → potential upward continuation
Bearish crossover → potential downward continuation
⚪ Use the Predicted Trend Regime
The background highlights the model’s broader directional bias:
Green → predicted bullish regime
Red → predicted bearish regime
Regime shifts often indicate:
early trend transitions
continuation confirmation
structural changes in expectation
⚪ Combine Signals
Best use comes from alignment:
Oscillator above zero + bullish regime + signal → strong continuation bias
Oscillator below zero + bearish regime + signal → strong downside bias
Divergence between the two → caution / mixed signals
█ Settings
Pattern Length – Controls how many bars define the current pattern. Higher values capture more structure, lower values increase responsiveness.
Memory Size – Number of historical patterns stored for comparison. Larger values improve context but increase computation.
Neighbors (k) – Number of closest matches used in prediction. Lower values are more reactive, higher values are smoother.
Prediction Smoothing – EMA smoothing applied to the raw prediction. Reduces noise at the cost of lag.
Signal Length – Smoothing of the signal line used for crossover signals.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
Machine Learning Momentum Index (MLMI) [Zeiierman]

█ Overview
The Machine Learning Momentum Index (MLMI) represents the next step in oscillator trading. By blending traditional momentum analysis with machine learning, MLMI delivers a dynamic tool suited to the complexities of modern financial markets, offering traders adaptive, real-time insight into market momentum and prevailing trends.
█ How It Works:
Momentum Analysis: MLMI employs a dual-layer analysis, utilizing quick and slow weighted moving averages (WMA) of the Relative Strength Index (RSI) to gauge the market's momentum and direction.
Machine Learning Integration: Through the k-Nearest Neighbors (k-NN) algorithm, MLMI intelligently examines historical data to make more accurate momentum predictions, adapting to the intricate patterns of the market.
MLMI's precise calculation involves:
Weighted Moving Averages: Calculations of quick (5-period) and slow (20-period) WMAs of the RSI to track short-term and long-term momentum.
k-Nearest Neighbors Algorithm: Distances between current parameters and previous data are measured, and the nearest neighbors are used for predictive modeling.
Trend Analysis: Recognition of prevailing trends through the relationship between quick and slow-moving averages.
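A toy Python sketch of the idea: compute the weighted moving averages, then let the k nearest historical parameter pairs vote. All names and the +1/-1 outcome encoding are illustrative, not the indicator's actual implementation:

```python
def wma(xs, n):
    # linearly weighted moving average of the last n values
    w = range(1, n + 1)
    return sum(wi * x for wi, x in zip(w, xs[-n:])) / sum(w)

def mlmi_predict(history, quick, slow, k):
    # history: (quick_wma, slow_wma, outcome) triples from past bars,
    # outcome being e.g. +1 / -1 for the momentum that followed.
    # The prediction is the summed vote of the k nearest parameter
    # pairs, by squared Euclidean distance in (quick, slow) space.
    near = sorted(history, key=lambda h: (h[0] - quick) ** 2 + (h[1] - slow) ** 2)[:k]
    return sum(h[2] for h in near)
```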
█ How to use
The Machine Learning Momentum Index (MLMI) can be utilized in much the same way as traditional trend and momentum oscillators, providing key insights into market direction and strength. What sets MLMI apart is its integration of artificial intelligence, allowing it to adapt dynamically to market changes and offer a more nuanced and responsive analysis.
Identifying Trend Direction and Strength: The MLMI serves as a tool to recognize market trends, signaling whether the momentum is upward or downward. It also provides insights into the intensity of the momentum, helping traders understand both the direction and strength of prevailing market trends.
Identifying Consolidation Areas: When the MLMI Prediction line and the WMA of the MLMI Prediction line become flat/oscillate around the mid-level, it's a strong sign that the market is in a consolidation phase. This insight from the MLMI allows traders to recognize periods of market indecision.
Recognizing Overbought or Oversold Conditions: By identifying levels where the market may be overbought or oversold, MLMI offers insights into potential price corrections or reversals.
█ Settings
Prediction Data (k)
This parameter controls the number of neighbors to consider while making a prediction using the k-Nearest Neighbors (k-NN) algorithm. By modifying the value of k, you can change how sensitive the prediction is to local fluctuations in the data.
A smaller value of k will make the prediction more sensitive to local variations and can lead to a more erratic prediction line.
A larger value of k will consider more neighbors, thus making the prediction more stable but potentially less responsive to sudden changes.
Trend length
This parameter controls the length of the trend used in computing the momentum. This length refers to the number of periods over which the momentum is calculated, affecting how quickly the indicator reacts to changes in the underlying price movements.
A shorter trend length (smaller momentumWindow) will make the indicator more responsive to short-term price changes, potentially generating more signals but at the risk of more false alarms.
A longer trend length (larger momentumWindow) will make the indicator smoother and less responsive to short-term noise, but it may lag in reacting to significant price changes.
Please note that the Machine Learning Momentum Index (MLMI) might not be effective on higher timeframes, such as daily or above. This limitation arises because there may not be enough data at these timeframes to provide accurate momentum and trend analysis. To overcome this challenge and make the most of what MLMI has to offer, it's recommended to use the indicator on lower timeframes.
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!