Pine3D: A Native 3D Graphical Rendering Engine
Pine3D is a full 3D rendering engine for TradingView, powered by Pine Script™ v6.
Pine3D pushes forward the frontier of TradingView's 3D rendering capabilities, providing a fully fledged graphical engine under an intuitive, chainable, object oriented API. Build meshes, transform them in world space, light them, cast shadows, project them through a perspective camera, and render the result directly on your chart, all without ever touching the trigonometry, synchronization, or optimization yourself.
The library brings a streamlined workflow to anyone who wants to visualize data in 3D, without needing to know anything about the complex math that has previously gatekept such indicators. Pine3D does all the heavy lifting, including extreme optimization techniques designed for production ready indicators.
The entire API is chainable and tag addressable, so spawning a mesh, registering it, pointing the camera at it, and rendering the frame is a four line affair:
Mesh mybox = cube(40.0, color.orange).setTag("hero").rotateBy(0.0, 45.0, 0.0)
scene.add(mybox)
scene.lookAt("hero")
render(scene)
🔷 SURFACES: CONTOUR BAND RENDERING
Pine Script imposes a hard ceiling of 100 polylines and 500 lines per indicator. On the surface this looks fatal for dense 3D meshes: every triangle drawn naively burns one of those 100 slots, or two of the 500, and the budget evaporates within a few hundred faces.
The conventional escape hatch is strip stitching: tracing a polyline forward along one row of a grid and back along the next, packing a ribbon of quads into a single drawing slot. It buys a meaningful multiplier, but it pays for that multiplier with two structural constraints baked into the geometry itself:
One color per strip. A polyline carries a single stroke and fill color, so every cell along the ribbon must share the same shade. The moment you want per cell lighting, contour banding, or value driven gradients, every color change forces a new polyline and the budget collapses.
One contiguous ribbon per slot. Strips can only describe topologically connected runs of cells. Disjoint regions, holes, islands, and value clustered fragments scattered across the surface each demand their own polyline.
Pine3D breaks both constraints at once.
At the core of the engine sits an innovation that redefines the limits for visual fidelity: contour band rendering using degenerate bridge stitching. The technique quantizes a surface's elevation into colored bands, then collapses every cell that falls inside the same band, no matter where it sits on the screen, into one continuous, hole aware polyline path per band, threading invisible zero width bridges between disjoint islands so that a single polyline can carry thousands of polygon equivalent fragments scattered across the geometry.
The result:
A single polyline can render up to 2,000 disconnected triangle equivalents, spread across arbitrarily separated regions of the surface.
Theoretical ceiling of around 200,000 disconnected faces inside the 100 polyline budget, a regime that strip based stitching cannot enter at any color count above one.
A 40 x 40 heightmap (around 3,000 triangles) renders inside the budget with full per band contour coloring and room to spare. Stress harnesses have run 40 x 80 grids.
Each band's path is depth sorted, near plane culled, and cached between bars, so once geometry is built only the screen space projection runs per frame.
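The bridge trick itself is easy to demonstrate outside of Pine3D. The sketch below is a minimal standalone script using only TradingView's raw polyline API (coordinates are illustrative): it draws two disjoint filled triangles with one polyline by retracing the connecting segment, so the out-and-back bridge encloses zero area and stays invisible under a fully transparent stroke. Pine3D's contour engine applies the same idea per band, at scale, with depth sorting and caching layered on top.
//@version=6
indicator("Degenerate bridge sketch", overlay = true)
if barstate.islast
    array<chart.point> pts = array.new<chart.point>()
    pts.push(chart.point.from_index(bar_index - 40, low))          // triangle A
    pts.push(chart.point.from_index(bar_index - 35, low * 1.02))
    pts.push(chart.point.from_index(bar_index - 30, low))
    pts.push(chart.point.from_index(bar_index - 40, low))          // close A
    pts.push(chart.point.from_index(bar_index - 10, low))          // bridge out
    pts.push(chart.point.from_index(bar_index - 5, low * 1.02))    // triangle B
    pts.push(chart.point.from_index(bar_index, low))
    pts.push(chart.point.from_index(bar_index - 10, low))          // close B
    pts.push(chart.point.from_index(bar_index - 40, low))          // bridge back: retraced, zero net area
    polyline.new(pts, line_color = color.new(color.orange, 100), fill_color = color.new(color.orange, 40))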
This algorithm enables scenes with extreme detail relative to the 100 polyline limit, and shifts the optimization focus from "drawing limits" to "CPU limits", which Pine3D natively handles with aggressive caching at every layer of the pipeline. The contour technique is currently integrated into the surface() function, with the same compression strategy generalizable to any mesh class and ultimately full scene rendering in future versions.
Non-uniform grids out of the box. surface() accepts optional axisX and axisZ arrays that override the default uniform spacing with custom column and row positions. This means logarithmic strike spacing on an option volatility surface, irregular timestamp spacing on a market depth heatmap, or any other non-evenly-sampled grid renders correctly without resampling the data first. The contour band engine, axis ticks, and gridBox cage all snap to the custom positions automatically.
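As a hedged sketch of the call shape (assumes the standard preamble and scene from the examples below; the axis values are illustrative log spaced strikes, and the keyword names follow the surface(heights, size, lowCol, highCol, levels, axisX, axisZ) signature listed in the API reference):
if barstate.isfirst
    array<float> strikes = array.from(10.0, 20.0, 40.0, 80.0, 160.0)   // log spaced X columns
    array<float> rows = array.from(0.0, 1.0, 2.0, 3.0, 4.0)            // uniform Z rows
    matrix<float> iv = matrix.new<float>(5, 5, 0.2)                    // placeholder volatility data
    p3d.Mesh volSurface = p3d.surface(iv, 200.0, color.teal, color.red, 16, axisX = strikes, axisZ = rows)
    scene.add(volSurface)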
A full contour surface is just a handful of lines; the damped ripple below builds once and never needs updating:
//@version=6
indicator("Pine3D - Contour Surface", overlay = false, max_polylines_count = 100, max_lines_count = 500, max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d

var p3d.Scene scene = p3d.newScene()
var p3d.Mesh heatmap = na

if barstate.isfirst
    // Damped cosine ripple
    int N = 20
    matrix<float> data = matrix.new<float>(N, N, 0.0)
    for r = 0 to N - 1
        for c = 0 to N - 1
            float dx = c - (N - 1) / 2.0
            float dz = r - (N - 1) / 2.0
            float d = math.sqrt(dx * dx + dz * dz) * 0.7
            data.set(r, c, math.cos(d) * math.exp(-d * 0.12) * 50.0)
    heatmap := p3d.surface(data, 200.0, color.blue, color.red, 24)
       .gridBox()
       .gridLabels(color.white, "X", "Amplitude", "Z")
    scene.add(heatmap)
    scene.camera.orbit(35.0, 25.0, 380.0)

if barstate.islast
    p3d.render(scene, lighting = true)
🔷 TRAIL3D: STREAMED OSCILLATOR PATHS
Trail3D is a first class streaming primitive built for visualizing two correlated time series as a 3D ribbon evolving through time. You give it a rolling buffer capacity and push (u, v) samples bar by bar; the primitive maintains the buffer, builds the ribbon geometry, and renders it inside a normalized bounding cube so the path always fits cleanly in view regardless of the underlying data range.
Under the hood, Trail3D is a coordinated bundle of polylines: one for the main ribbon, two for optional shadow projections onto the back wall and floor, and one for the wireframe cage. All four are depth sorted and occlusion clipped against the rest of the scene, and the primitive auto normalizes incoming samples against the rolling window's min/max so streaming data always fills the cube without manual scaling.
This enables a class of visualizations that would otherwise require dozens of polylines and manual buffer management: phase space portraits, Lissajous figures, oscillator pair correlations, attractor trajectories, and any "two indicators evolving together over time" study. The demo above shows a sine and cosine pair pushing samples each bar to trace a clean spiral inside the cage, the same pattern you would use to plot RSI vs MFI, momentum vs volatility, or any custom (u, v) signal pair.
A full streamed scene is a handful of lines:
//@version=6
indicator("Pine3D - Trail3D", overlay = false, max_polylines_count = 100, max_lines_count = 500, max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d

var p3d.Scene scene = p3d.newScene()
var p3d.Trail3D trail = na

if barstate.isfirst
    trail := p3d.trail3D(220.0, 200, color.yellow)
       .cage(true)
       .axisLabels("sin", "cos", color.white)
    trail._uProj.col := #00ffff69
    trail._vProj.col := #ff00ff71
    scene.add(trail)
    scene.camera.orbit(215.0, 20.0, 360.0)

float phase = bar_index * 0.15
float sinX = math.sin(phase) * 100.0
float cosY = math.cos(phase) * 100.0

if barstate.isconfirmed
    trail.pushSample(sinX, cosY)
    p3d.render(scene)
🔷 BARS3D: CATEGORICAL 3D BAR CHARTS
bars3D() turns any series of values into a fully lit, depth sorted 3D bar chart in a single call. Each bar is height mapped to its value, color graded between a low and high color, and packed into one combined mesh with per bar depth grouping so individual bars sort correctly even inside the merged geometry. The companion updateBars() mutator refreshes heights, colors, and labels in place every bar without rebuilding geometry, making it suitable for live rankings, rolling windows, and animated comparisons.
The chainable barLabels(catNames, valNames) helper attaches category labels at the base of each bar and value labels at the top, both depth sorted with the rest of the scene. Category labels are set once at build time, while value labels can be passed to updateBars(values, valLabels = ...) each frame to reflect live data. Combined with wireGrid() for the floor and a contour surface() in the background, bars3D() becomes the centerpiece of dashboards comparing assets, sectors, timeframes, or any categorical metric.
Negative values are handled automatically: bars below zero extrude downward from the base plane with reversed face winding, so signed series like PnL, delta, or momentum histograms render correctly without any extra setup.
A complete labeled bar chart is just a few lines:
//@version=6
indicator("Pine3D - Bars3D", overlay = false, max_polylines_count = 100, max_lines_count = 500, max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d

var p3d.Scene scene = p3d.newScene()
var p3d.Mesh bars = na

array<float> values = array.from(volume - volume[1], volume[1] - volume[2], volume[2] - volume[3], volume[3] - volume[4], volume[4] - volume[5], volume[5] - volume[6])
array<string> names = array.from("ΔV0", "ΔV-1", "ΔV-2", "ΔV-3", "ΔV-4", "ΔV-5")

if barstate.isfirst
    bars := p3d.bars3D(values, 30.0, 30.0, 10.0, color.blue, color.red, 200.0)
       .barLabels(names)
    scene.add(bars)
    p3d.wireGrid(scene, 300.0, 300.0, 6, 6, color.new(color.gray, 80))
    scene.camera.orbit(215.0, 25.0, 360.0)

if barstate.islast
    bars.updateBars(values)
    p3d.render(scene, lighting = true)
Omitting valLabels in updateBars() tells the engine to auto format each numeric value via str.tostring(). Pass valLabels only when you need custom strings.
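For example, a hedged fragment reusing the bars and values names from the script above (it belongs inside that script's barstate.islast block):
// Volume-formatted value labels instead of the default str.tostring() text
array<string> valTexts = array.new<string>()
for i = 0 to values.size() - 1
    valTexts.push(str.tostring(values.get(i), format.volume))
bars.updateBars(values, valLabels = valTexts)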
🔷 SCATTER CLOUDS: POINTS IN 3D SPACE
Pine3D treats scatter clouds as a first class use case without needing a dedicated scatter API. Because Label3D is the primitive and scene.add(array) is a single batch operation, you can scatter up to 500 points anywhere in 3D space, each with independent color, symbol, size, and tooltip, and have them depth sorted and occlusion clipped against the rest of the scene automatically.
Each point is a fully addressable Label3D with mutable fields. You can change position, textColor, bgColor, labelStyle (any label.style_* glyph including circles, squares, diamonds, triangles, crosses, arrows, flags), labelSize (any size.* preset), and text per point per bar. The renderer reads these mutations every frame, so animation is just direct field assignment.
This unlocks a wide class of visualizations: clustered data scatter, K means visualizations, particle systems, parametric surfaces sampled as point clouds, gradient colored attractors, multi class classification overlays, and structured curves like the demo above. The double helix demo plots two intertwined parametric strands as ~500 points with alternating colors and per point sizing, all inside the standard scene.add(array) pipeline.
The pattern is straightforward: build the array once in barstate.isfirst, add it to the scene, then mutate point fields per bar to animate.
//@version=6
indicator("Pine3D - Scatter Cloud", overlay = false, max_polylines_count = 100, max_lines_count = 500, max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d

var p3d.Scene scene = p3d.newScene()
var array<p3d.Label3D> points = array.new<p3d.Label3D>()

if barstate.isfirst
    for i = 0 to 499
        p3d.Vec3 pos = p3d.vec3(0.0, 0.0, 0.0)
        points.push(p3d.Label3D.new(position = pos, txt = "•"))
    scene.add(points)
    scene.camera.orbit(35.0, 20.0, 400.0)

if barstate.islast
    for i = 0 to points.size() - 1
        float t = i * 0.05 + bar_index * 0.01
        p3d.Label3D pt = points.get(i)
        pt.position := p3d.vec3(80.0 * math.cos(t), i * 0.4 - 100.0, 80.0 * math.sin(t))
        pt.textColor := i % 2 == 0 ? color.aqua : color.fuchsia
    p3d.render(scene)
----------------------------------------------------------------------------------------------------------------
🔷 TWO LAYER ARCHITECTURE
Pine3D ships as a clean, two layer library:
🔸 Layer 1 - DIY API. First principles building blocks (Vec3, Mesh, Camera, Light, Scene, plus world space overlay primitives) for total creative control. Author your own geometry, camera behavior, lighting setup, and scene graph from scratch.
🔸 Layer 2 - High Level Helpers. Production ready wrappers like surface(), bars3D(), trail3D(), updateBars(), updateSurface(), sphere(), torus(), cylinder(), and wireGrid(), plus chainable contour helpers gridBox() and gridLabels() that wrap the primitives into a few lines of code. Scatter clouds use the standard Label3D primitive directly.
The object model is chainable and scene oriented, so complex setups still read cleanly.
🔷 FEATURE LIST
Contour Surface Rendering - The most powerful 3D surface engine ever released for Pine Script. Render tens of thousands of polygon equivalent faces using a single polyline per contour band, delivering smooth, continuous terrain with natural ridges and valleys.
Adaptive Rail Sharing - Solid meshes drawn with the default linefill backend reuse one edge line between adjacent coplanar faces, averaging roughly 1.6 lines per face instead of the naive two, pushing practical mesh capacity up to ~360 faces depending on topology.
Interior Face Culling on Merge - mergeMeshes(meshes, removeInterior = true) detects coincident faces with opposing normals and strips them, so voxel style scenes (stacked cubes, block walls, lattice geometry) ship only their exterior shell and spend no budget on hidden interior faces (see the sketch after this list).
True Perspective Camera System - Full 3D camera with position, target, fov, and orbit() controls. Supports cinematic camera movement, lookAt by mesh tag, and realistic depth.
Real Time Lighting and Shadows - Directional and point lights with configurable ambient, shadow strength, self shadowing, and a spatial grid acceleration structure for fast shadow queries.
High Performance Update System - updateSurface() and updateBars() let you animate massive datasets bar by bar without rebuilding geometry, keeping CPU usage minimal.
Rich Primitive Library - Cubes, cuboids, spheres, cylinders, tori, pyramids, planes, discs, circles, custom meshes, and the groundbreaking bars3D() with automatic labels.
Streamed Trail Primitive - trail3D() maintains a rolling buffer of (u, v) samples and renders them as a 3D ribbon inside a bounding cube, with optional projections onto the back wall and floor and a wireframe cage.
Depth Sorted Overlays - 3D labels, lines, polylines, wire grids, and trails, all correctly occluded and painter sorted against the rest of the scene.
Professional Contour Helpers - gridBox() and gridLabels() automatically add clean bounding boxes and axis titles, ticks, and series names that refresh on every updateSurface() call.
Tag Based Scene Graph - Every Mesh, Label3D, Line3D, and Polyline3D can carry a string tag. Scene exposes getMesh(), getLabel(), getLine(), getPolyline(), lookAt(), and remove() by tag, turning your scene into a lookup by name graph instead of an index juggling exercise.
Chainable, Intuitive API - Everything is designed for maximum readability and speed of development. Build complex scenes in just a few lines.
Production Ready Optimizations - World vertex caching, view projection caching, face preprocessing cache, shadow grid cache, and contour geometry cache, all managed automatically.
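To illustrate the interior face culling mentioned above, here is a minimal hedged sketch (assumes the standard preamble and scene; signatures follow the API reference below):
if barstate.isfirst
    // Three touching cubes merged into one exterior shell
    array<p3d.Mesh> blocks = array.new<p3d.Mesh>()
    for i = 0 to 2
        blocks.push(p3d.cube(40.0, color.teal).moveBy(i * 40.0, 0.0, 0.0))
    p3d.Mesh wall = p3d.mergeMeshes(blocks, "wall", true)   // removeInterior strips the shared walls
    scene.add(wall)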
----------------------------------------------------------------------------------------------------------------
🔷 THE RENDERER
Every frame is produced by a single call to render(scene, ...). The renderer runs the full pipeline: world transform, camera transform, back face culling, occlusion culling, depth sort, directional or point lighting with shadows, and perspective projection.
⚠ render() clears the entire chart drawing pool at the start of every call - every polyline, line, label, and linefill on the chart is deleted before Pine3D redraws, not just the ones it created. If you mix Pine3D with manual label.new(), line.new(), or similar calls, those drawings must be emitted after render() or they will be wiped every frame.
🔸 Setup Requirements. Pine3D consumes polylines, lines, and labels simultaneously, so your indicator() declaration must raise all three budgets, and the library must be imported under an alias:
indicator("My 3D Scene", overlay = false,
     max_polylines_count = 100,
     max_lines_count = 500,
     max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d
🔸 render() parameters.
maxFaces (int, default 100). Hard cap on solid faces drawn per frame. Contour bands, wireframe edges, labels, lines, and overlay polylines are not counted against this cap, and are bounded only by TradingView's global 100 polyline / 500 line / 500 label budgets.
culling (bool, default true). Enable back face culling.
lighting (bool, default false). Enable diffuse shading. Reads scene.light if set; otherwise falls back to the render() args.
lightDir (Vec3). Overrides scene.light.direction when provided. Points toward the light.
ambient (float, default 0.3). Minimum brightness for shadowed faces (0.0-1.0).
wireframe (bool, default false). Force outline only output for the entire scene.
occlusion (bool, default true). Sparse raster pass that drops hidden faces before drawing. Major perf win on dense scenes.
occlusionRaster (int, default 768). Raster resolution of the occlusion buffer. Lower = faster but coarser; higher = stricter hidden face rejection.
Explicit render() args always win over scene.light, which makes render() the right place for ad hoc, per frame lighting tweaks.
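For example, a hedged one-off override using the parameters listed above:
// These explicit args take precedence over scene.light for this call only
p3d.render(scene, lighting = true, lightDir = p3d.vec3(1.0, -0.5, 0.25), ambient = 0.2)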
----------------------------------------------------------------------------------------------------------------
🔷 MESH DRAWING MODES
Two independent axes control how a mesh appears on the chart:
🔸 Style (via mesh.setStyle(...)) - what gets drawn:
"solid". Filled faces. Default.
"wireframe". All edges, no fill. Shows interior geometry.
"wireframe_front". Only front facing edges. Cleaner silhouette for convex meshes.
🔸 Draw Mode (via mesh.drawMode) - which TradingView primitive carries the solid faces:
"linefill" (default). Uses the line and linefill budgets. An adaptive rail sharing optimization reuses one edge line between adjacent coplanar faces, pushing practical capacity up to ~360 faces per mesh depending on topology. Supports in place updates via updateSurface() and updateBars(). Rails are drawn transparent, so solid faces in this mode have no visible outline - use a wireframe style or "poly" drawMode if you need stroked edges. Recommended for all new code.
"poly". Legacy polyline backend. Capacity ~100 faces, no in place updates, but renders the face outline using mesh.lineStyle and mesh.lineWidth. Use only when you need styled solid face outlines.
Wireframe styles always render with line primitives regardless of drawMode. Stroke width and style on edges (and on poly mode face outlines) come from mesh.lineWidth and mesh.lineStyle, which you mutate by direct field assignment.
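A short sketch of both axes in action (assumes an existing mesh; values illustrative):
// Style axis: what gets drawn
mesh.setStyle("wireframe_front")     // front facing edges only
mesh.lineWidth := 2                  // direct field mutation for stroke width
mesh.lineStyle := line.style_dashed
// Draw mode axis: which primitive carries solid faces
mesh.setStyle("solid")
mesh.drawMode := "poly"              // legacy backend, but gives stroked face outlines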
----------------------------------------------------------------------------------------------------------------
🔷 QUICK START
The best practice lifecycle is simple:
Create one persistent Scene with newScene().
Build meshes and helper overlays once in barstate.isfirst.
On later bars, mutate objects in place with transforms or helper mutators like updateBars() and updateSurface().
Call render(scene, ...) once per frame. It automatically clears the previous chart drawings.
A complete, lit, animated 3D scene is still a handful of lines:
//@version=6
indicator("My First 3D Scene", overlay = false, max_polylines_count = 100, max_lines_count = 500, max_labels_count = 500)
import Alien_Algorithms/Pine3D/1 as p3d

var p3d.Scene scene = p3d.newScene()
var p3d.Mesh sun = na

if barstate.isfirst
    scene.setLightDir(1.0, -1.0, 0.5).setAmbient(0.3)
    sun := p3d.sphere(50.0, 16, 12, color.orange).setTag("sun")
    scene.add(sun)
    p3d.wireGrid(scene, 300.0, 300.0, 6, 6, color.new(color.gray, 80))
    scene.camera.orbit(35.0, 25.0, 220.0)

if barstate.islast
    sun.rotateBy(0.0, 1.5, 0.0)
    p3d.render(scene, lighting = true)
----------------------------------------------------------------------------------------------------------------
🔷 RECOMMENDED USAGE PATTERN
Keep your Scene and major meshes in var declarations.
Build geometry once in barstate.isfirst.
Use updateSurface() and updateBars() on later bars instead of rebuilding meshes.
Use scene level helpers like wireGrid() when you want overlays added immediately.
Use trail3D() when you want a streamed oscillator style path with built in wall projections and cage geometry.
For scatter clouds, build an array once, hand it to scene.add(pts), then mutate pt.position, pt.textColor, etc. each bar to animate.
Use mesh level gridBox() and gridLabels() (contour) and barLabels() (bars) to attach overlays to the mesh setup chain. They are drained into the scene by scene.add(mesh).
🔷 CONSIDERATIONS
scene.clear() vs render(). scene.clear() removes objects from the scene graph (meshes, labels, lines, polylines). render() only clears the previous frame's TradingView drawings and redraws from the current scene graph. You almost never need scene.clear() in the build once and update pattern.
Global scope series for updateSurface() / updateBars(). If your data uses Pine's history operator ([]) or calls functions like ta.rsi(), ta.atr(), or request.security(), those must be declared at global scope so Pine tracks their bar by bar history. Calling them inside barstate.islast produces inconsistent results or compiler errors.
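A hedged sketch of the safe pattern, reusing the heatmap mesh name from the contour example (buildMatrixFrom is a hypothetical helper, named only for illustration):
// Correct: ta.* calls execute at global scope on every bar...
float rsiVal = ta.rsi(close, 14)
float atrVal = ta.atr(14)
if barstate.islast
    // ...and only their already-computed values are consumed here
    heatmap.updateSurface(buildMatrixFrom(rsiVal, atrVal))   // hypothetical helper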
gridLabels() tick values auto refresh. When you call updateSurface(), any tick value labels created by gridLabels() are automatically updated to reflect the new data range. Axis titles and positions stay constant. You don't need to rebuild them.
barLabels() value labels via updateBars(). Create category labels once with mesh.barLabels(catNames) at build time, then pass valLabels to updateBars() on each frame. Value labels are refreshed automatically. Don't call barLabels() again.
Lighting convenience methods are chainable. scene.setLightDir(), setLightPos(), setLightMode(), setAmbient(), setShadowStrength(), and showLightSource() all return Scene and can be chained: scene.setLightMode("point").setLightPos(0, 200, 150).setAmbient(0.25).
Mesh transforms return Mesh. moveTo(), moveBy(), rotateTo(), rotateBy(), scaleTo(), scaleUniform(), setTag(), setStyle(), setColor(), show(), and hide() all return Mesh for chaining: mesh.moveTo(0, -20, 0).rotateTo(0, 45, 0).setStyle("solid").
Degrees vs radians. rotateTo() and rotateBy() on Mesh expect degrees. The low level Vec3.rotateX/Y/Z() methods expect radians.
scene.lookAt() is tag only. scene.lookAt(t) accepts a string tag and points the camera at that mesh. To aim the camera at an arbitrary Vec3, call scene.camera.lookAt(vec) directly.
remove(tag) removes one object. The search order is meshes, then labels, then lines, then polylines, and the first hit wins. Avoid reusing tags across primitive types if you intend to delete by tag.
Shadow grid acceleration is directional light only. The spatial shadow grid is only built when lightMode == "directional". Point lights fall back to a linear O(M) scan, so heavy shadow scenes are fastest in directional mode.
guiShift and yOffset. scene.guiShift and scene.yOffset position the 3D viewport on the chart without consuming historical bar slots. Increase guiShift to push the scene rightward into future bar space; adjust yOffset to slide it vertically in price units.
bar_time projection. All chart drawings are emitted with xloc.bar_time, so the scene can sit arbitrarily far left or right of bar_index without forcing Pine to extend its history buffer. This is what keeps the engine stable on long charts and future projected scenes.
barLabels() without values. When you call mesh.barLabels(catNames) and omit value labels, every later updateBars(values) auto formats the numeric values via str.tostring(). Pass valLabels only when you need custom strings.
Direct mesh.vertices mutation requires invalidateCache(). Transform mutators (moveTo, rotateBy, scaleTo, etc.) invalidate the world vertex cache on their own. Only raw index writes like mesh.vertices.set(i, newVec) need a manual mesh.invalidateCache() call to force re-projection. Skipping it will make the renderer draw stale geometry.
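A minimal sketch (the Vec3 field names x/y/z are assumed from the vec3(x, y, z) constructor):
// Raw vertex write bypasses the transform mutators, so flush the cache manually
p3d.Vec3 v = mesh.vertices.get(0)
mesh.vertices.set(0, p3d.vec3(v.x, v.y + 10.0, v.z))
mesh.invalidateCache()   // without this, the renderer reuses the stale world vertex cache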
Drawing budgets fail silently. If a scene emits more than 100 polylines, 500 lines, or 500 labels in a single frame, TradingView silently drops the overflow without raising a runtime error. Missing geometry almost always means a budget overrun - lower maxFaces, drop a contour level, or simplify overlay primitives to bring the frame back inside the caps.
render() deletes non Pine3D drawings too. Every render() call clears polyline.all, line.all, label.all, and linefill.all before redrawing. Any manual label.new(), line.new(), etc. issued before render() in the same frame will be wiped. Issue custom drawings after the render call if you need them to persist.
mergeMeshes() preserves depth grouping. When every source mesh passed into mergeMeshes() has the same vertex and face count (e.g. identical primitives in a voxel grid), the merged mesh auto derives depth group boundaries so the combined geometry still sorts correctly per original instance. Mixing primitives with different topologies disables the grouping.
CPU timeouts: knobs to turn. Pine Script enforces a per bar execution budget, and dense scenes can trip it before the drawing budget ever does. If a scene compiles but times out at runtime, reach for these levers in order: lower occlusionRaster (e.g. 768 -> 384) for the biggest single perf win, reduce maxFaces to cap the solid face pool, drop levels on contour surfaces, simplify sphere/torus segment counts, and gate heavy work behind barstate.islast so history bars only build geometry rather than render it.
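In practice that triage often reduces to a single adjusted render call plus a barstate gate (values illustrative):
if barstate.islast   // history bars only build geometry; just the last bar renders
    p3d.render(scene, lighting = true, occlusionRaster = 384, maxFaces = 60)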
----------------------------------------------------------------------------------------------------------------
🔷 MORE EXAMPLES
The following scenes were all built entirely in Pine Script™ v6 using Pine3D as the rendering layer. They exist to demonstrate that the library is a real engine capable of complex, production grade visualizations.
🔸 4D Hypercube (Tesseract). A rotating tesseract, projected from 4D to 3D to 2D in real time using a custom 4D rotation matrix layered on top of Pine3D's standard projection pipeline.
🔸 Solar System. Following the publication of my 3D Solar System back in 2024, which introduced new graphical rendering concepts into Pine Script, we have seen a wave of various interpretations of the underlying vector classes, ranging from tutorials to niche specific integrations using hardcoded math. It became clear that a unified architecture was needed, one that would lower the barrier to entry while simultaneously handling the optimization process, which is both complex and error prone to do manually.
That architecture is what Pine3D delivers. Below is a re-creation of the classic 3D Solar System rebuilt entirely on top of the library. It uses a fraction of the original code, renders roughly 5x faster, and adds real lighting cast directly from the Sun, all while consuming only a third of the available drawing budget thanks to the occlusion and culling mechanisms Pine3D handles out of the box.
----------------------------------------------------------------------------------------------------------------
🔷 API REFERENCE
🔸 Top Level Entry Points. newScene() creates a ready to use Scene with a default camera and light. render(scene, ...) draws the current frame and auto clears the previous frame's chart drawings; see the Renderer section above for the full parameter list. vec3(x, y, z) creates a Vec3. colorBrightness() is an exported color utility helper.
🔸 Mesh Factories.
Primitives - cube(), cuboid(), pyramid(), plane(), sphere(), cylinder(), torus(), grid(), disc(), circle() for ready made geometry.
customMesh(verts, faces) - Low level escape hatch for authoring your own topology.
mergeMeshes(meshes, tag, removeInterior) - Bakes transforms and combines many meshes into one. With removeInterior = true, coincident faces with opposing normals (e.g. shared walls between adjacent cubes in a grid) are culled so only the exterior shell survives, a major optimization for dense voxel style scenes.
surface(heights, size, lowCol, highCol, levels, axisX, axisZ) - Creates a contour surface mesh.
bars3D(values, barWidth, barDepth, spacing, lowCol, highCol, maxHeight) - Creates a combined 3D bar chart mesh; add labels with the chainable barLabels(names, values) method.
🔸 UDT Constructors. Overlay primitives and face descriptors are plain UDTs. Because these types have many fields, always instantiate them with named arguments rather than positional, e.g. Label3D.new(position = pos, txt = "•"):
Face - fields: vi (array of vertex indices into the parent mesh), col. Used when authoring customMesh() topology; every face must have at least 3 indices and should be planar.
Label3D - fields: position, txt, textColor, bgColor, labelStyle, labelSize, fontFamily, tooltip, visible, tag. Only position is required.
Line3D - fields: start, end, col, width, visible, tag, lineStyle.
Polyline3D - fields: points, col, fillColor, width, closed, visible, tag, lineStyle.
Vec3.new(x, y, z) or the vec3(x, y, z) shorthand.
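For example, a hedged construction of a tagged 3D line using the fields listed above (coordinates illustrative):
p3d.Vec3 a = p3d.vec3(-100.0, 0.0, 0.0)
p3d.Vec3 b = p3d.vec3(100.0, 50.0, 0.0)
p3d.Line3D ray = p3d.Line3D.new(start = a, end = b, col = color.silver, width = 2, tag = "ray")
scene.add(ray)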
🔸 Trail Primitive. trail3D(size, capacity, trailCol, minSamples) creates a streamed Trail3D primitive with a main trail, two projection polylines, and a cage polyline. capacity is internally clamped to 300 samples to keep the rolling buffer inside Pine's execution budget; passing a larger value silently resolves to 300. minSamples (default 60) is the sample count at which the cage reaches its full cube width: below that the cage stays cube shaped and samples stretch across it; above that the cage grows rightward at a fixed step until capacity is hit. scene.add(trail) registers the sub primitives into the scene. Trail3D methods: pushSample(), axisLabels(), cage(), moveTo(), show(), hide().
🔸 Mesh Methods.
Transform - moveTo(), moveBy(), rotateTo(), rotateBy(), scaleTo(), scaleUniform().
Appearance - setColor(), setFaceColor(), setStyle(), show(), hide(), setTag().
Stroke styling (direct) - mesh.lineWidth := 3 and mesh.lineStyle := line.style_dashed control width and style of every visible mesh edge in wireframe modes and the outline of solid faces in drawMode = "poly".
Shadow opt out (direct) - mesh.castShadow := false excludes the mesh from shadow casting while still receiving light. Useful for ghost overlays, debug geometry, or semi transparent meshes you do not want occluding the scene.
Lifecycle - clone(), faceCount(), invalidateCache().
Data mutation - updateSurface() and updateBars() refresh persistent meshes in place. updateBars() refreshes any bar label positions automatically; pass catLabels / valLabels to also update the text.
Contour helpers - gridBox() and gridLabels() queue overlays on the mesh and hand them to the scene when you call scene.add(mesh).
Bar helpers - barLabels() is chainable on a bars3D() mesh and queues its category and value labels for the next scene.add(mesh).
Note: rotateTo() and rotateBy() expect degrees. The low level Vec3.rotateX/Y/Z() methods work in radians.
🔸 Scene Methods.
Lighting - setLightDir(), setLightPos(), setLightMode(), setAmbient(), setShadowStrength(), showLightSource().
Scene graph - add(mesh), add(label), add(array), add(line), add(polyline), add(trail), remove(index), remove(tag), clear().
Lookup and navigation - getMesh(), getLabel(), getLine(), getPolyline(), lookAt(), totalFaces().
Cache control - invalidateLightCache() after mutating light direction or scene bounds externally; invalidateAllCaches() to also invalidate every mesh's world vertex cache (use after directly mutating mesh.vertices).
Note: scene.clear() clears the scene graph itself. render() only clears the previous frame's TradingView drawings.
🔸 Camera Methods. setPosition(x, y, z) moves the camera. lookAt(x, y, z) / lookAt(vec3) points at a world space target. orbit(angleX, angleY, distance) does a spherical orbit around the current target. setFov(val) sets the perspective scale factor. Camera fields (position, target, fov) are also directly mutable via assignment when you need to tune them outside the provided setters, e.g. scene.camera.fov := 1200.0.
🔸 Light Field Mutation. In addition to the scene level convenience setters, every field on scene.light is directly mutable for fine grained tuning: scene.light.selfShadow := true enables self shadowing, scene.light.shadowBias := 0.2 adjusts the shadow acne offset, scene.light.shadowStrength and scene.light.ambient are also exposed. Mutate them after newScene() or between frames; the renderer reads them every call.
🔸 Vec3 Methods. Core math: add(), sub(), scale(), negate(), dot(), cross(), length(), normalize(), distanceTo(), lerp(). Rotation and helpers: rotateX(), rotateY(), rotateZ(), copy(), toString().
🔸 Overlay Primitive Methods.
Label3D - moveTo(), moveBy(), setText(), setTextColor(), setTooltip(), show(), hide(), setTag().
Line3D - setStart(), setEnd(), setPoints(), setColor(), show(), hide(), setTag().
Polyline3D - setColor(), show(), hide(), setTag().
Every UDT field is mutable via direct assignment for properties without a chainable setter:
Label3D - bgColor, labelStyle (label.style_*), labelSize (size.*), fontFamily (font.family_*), visible.
Line3D - width, lineStyle (line.style_solid / _dashed / _dotted / _arrow_left / _arrow_right / _arrow_both), visible.
Polyline3D - width, lineStyle (line.style_solid / _dashed / _dotted only; arrow styles are not supported by TradingView's polyline primitive), fillColor, closed, visible.
Mutations are read per frame by the renderer, so they animate freely.
🔸 High Level Scene Helpers. wireGrid(scene, w, d, divX, divZ, col) adds a depth sorted ground grid. scene.add(array) adds a batch of labels in one call - the idiomatic way to push a scatter cloud into the scene.
🔸 Mesh Level Chainable Overlays. mesh.barLabels(names, values, ...) adds category and value labels on a bars3D() mesh. mesh.gridBox(col, divs) adds a wireframe bounding box cage on a surface() mesh. mesh.gridLabels(col, xName, yName, zName, ticks, fmt) adds axis titles and tick value labels on a surface() mesh; tick values auto refresh on updateSurface(). All three are queued on the mesh and drained into the scene by scene.add(mesh).
----------------------------------------------------------------------------------------------------------------
This work is licensed under CC BY-NC-SA 4.0, meaning usage is free for non-commercial purposes, provided that Alien_Algorithms is credited in the description for the underlying software. For commercial use licensing, contact Alien_Algorithms.
Indicators and strategies
TASC 2026.05 The AutoTune Filter
█ OVERVIEW
This script implements the AutoTune Filter described by John F. Ehlers in the article "A Rolling Autocorrelation Function" from the May 2026 edition of the TASC Traders' Tips. The script analyzes rolling autocorrelation in filtered price data to calculate a band-pass filter that dynamically adjusts to apparent dominant cycles.
█ CONCEPTS
Autocorrelation function (ACF)
Autocorrelation measures the correlation of a time series with a lagged version of itself. The autocorrelation function (ACF) evaluates autocorrelation across a range of lags to gauge the extent to which values in a series vary jointly with previous values at different offsets.
The ACF can help traders identify patterns and trends in stochastic market data, characterize long-range dependence in a series, and more. In his article, Ehlers explains how the ACF can serve as a "bridge" between analysis in the time and frequency domains for identifying dominant cycles in market data.
Ehlers notes that at low lags, such as one bar, the autocorrelation in price data tends to be very high because prices don't often change dramatically from one bar to the next. As the lag increases, autocorrelation often decreases, reaching near zero for offsets at which the latest prices do not show a clear relationship with past prices.
However, he also observed that at specific lags, anticorrelation (negative correlation) can emerge, where the current values in the series move in one direction while past values move in the opposite direction. Based on this observation, he suggests that a lag with strong anticorrelation can indicate a significant cycle in the market data, where the cycle length is twice that of the analyzed lag.
To understand why this behavior can indicate significant cycles, consider a sine wave that completes a full oscillation every 20 bars. If the series is currently moving up, it will then move down 10 bars later, and then complete the cycle by moving up again 10 bars after that. The ACF of that sine wave returns a value of -1 for a lag of 10 bars, but not for other lower lags or higher lags up to 20.
In other words, a pure sine wave with a given period has perfect anticorrelation with a delayed version of itself that is offset by half of that period.
While market data does not typically behave like a pure sine wave, the same underlying principle applies: if the current prices exhibit a strong anticorrelation with previous prices at a given offset, a dominant cycle with a length of twice that offset is likely present in the current data.
AutoTune Filter
Ehlers proposes that traders can use the dominant cycle obtained via autocorrelation to set the critical period of a filter. Tuning a filter to respond most strongly to the measured cycle may promote more consistency in time alignment and help reduce destructive phase shifts.
He demonstrates one such implementation with his AutoTune Filter, an adaptive band-pass filter whose center period dynamically increments toward the dominant cycle calculated from an ACF over a given window.
The steps to calculate the AutoTune Filter are as follows:
1. Apply a two-pole high-pass filter to the series to reduce the effect of low-frequency (long-period) cycles on the autocorrelation calculation. The filtered series emphasizes cycles with lengths up to the specified cutoff period, and attenuates all others.
2. Compute the rolling ACF of the filtered data across the same window length as the filter's cutoff period.
3. Check the autocorrelation for each lag period, and identify the smallest lag with the lowest autocorrelation value. Multiply that lag by two to obtain the dominant cycle for the analyzed window.
4. If the difference between the current and previous dominant cycle is greater than two, limit the result for the current bar to two greater or less than the previous cycle's value to prevent large, sudden shifts in the filter's center period.
5. Finally, compute a band-pass filter using the value from step 4 as the center period.
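A hedged Pine sketch of steps 3 and 4 (names are illustrative; hp stands for the high-pass filtered series and window for the ACF length; samples are copied into an array so the correlation loop uses plain math rather than ta.* calls, which are unreliable inside loops):
// Step 3: find the smallest lag with the lowest autocorrelation
array<float> buf = array.new<float>()
for i = 0 to 2 * window - 1
    buf.push(hp[i])                       // buf.get(0) = newest sample
float minCorr = 2.0
int bestLag = 1
for lag = 1 to window
    float sx = 0.0
    float sy = 0.0
    float sxx = 0.0
    float syy = 0.0
    float sxy = 0.0
    for i = 0 to window - 1
        float x = buf.get(i)
        float y = buf.get(i + lag)
        sx += x
        sy += y
        sxx += x * x
        syy += y * y
        sxy += x * y
    float den = math.sqrt(window * sxx - sx * sx) * math.sqrt(window * syy - sy * sy)
    float c = den == 0.0 ? 0.0 : (window * sxy - sx * sy) / den
    if c < minCorr
        minCorr := c
        bestLag := lag
// The dominant cycle is twice the most anticorrelated lag
float rawCycle = 2 * bestLag
// Step 4: clamp the bar-to-bar change in the dominant cycle to +/- 2
var float domCycle = na
domCycle := na(domCycle) ? rawCycle : math.min(math.max(rawCycle, domCycle - 2), domCycle + 2)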
█ USAGE
This indicator includes four display modes to visualize the AutoTune Filter's calculations:
"High-pass filter": Plots the high-pass filtered data that the script analyzes for autocorrelation calculations.
"Min. correlation": Plots the lowest autocorrelation value calculated for the filtered series over the analyzed window.
"Dominant cycle": Plots the dominant cycle value that the final filter uses for its center period.
"Tuned band-pass filter" (default): Plots the final band-pass filtered result, i.e., the AutoTune filter.
Ehlers suggests that traders can identify peaks and valleys in prices for potential mean reversion signals by analyzing the rate of change in the tuned band-pass filter. If the rate of change is zero, the current price might be near a local high if the filter's value is positive, or near a local low if the value is negative.
Users can analyze the additional outputs to gain further insight into the filter's behaviors, and they can pass these plotted values to other scripts via source inputs for easy use in other custom calculations.
█ INPUTS
The indicator includes the following inputs in the "Settings/Inputs" tab:
Source: The series of values to process.
Window: The window length of the ACF calculation, and the cutoff period of the high-pass filter. The maximum possible dominant cycle length is two times this value.
Output: One of the four display modes ("High-pass filter", "Min. correlation", "Dominant cycle", or "Tuned band-pass filter").
Smart Trader, Episode 06, Isotropic Trend Lines
🔷 WHAT IS ST-EP06 — ISOTROPIC TREND LINES?
ST-EP06 is a multi-scale structural trend channel indicator built on a σ-normalized coordinate system. It is designed to solve one of the oldest unaddressed problems in technical analysis:
trend angles that cannot be compared across instruments, timeframes, or volatility regimes.
A trend line drawn on a chart appears to carry a measurable angle — yet that angle is an artifact of the display window, not a property of the market. Resize the chart horizontally and the slope flattens; compress it and the slope steepens. A given price movement on Gold daily and Bitcoin 1-hour may produce visually identical slopes on screen while reflecting entirely different structural conditions. This happens because traditional charts use a coordinate space where the vertical axis (price) and the horizontal axis (time) share no fixed dimensional relationship.
The consequence is not merely cosmetic. A trader cannot meaningfully compare the steepness of a trend on one instrument with another — or even across timeframes on the same instrument — because the weight of "one unit of price per bar" varies with the instrument's current volatility.
As the author of this indicator, I sought a coordinate system where trend angles would be an intrinsic structural property of the market, independent of charting software or display settings. The goal: a space where a 30° uptrend on EUR/USD weekly carries the same structural meaning as a 30° uptrend on NASDAQ 5-minute — indicating that each market is moving at the same rate relative to its own realized volatility.
The solution draws on the principle of dimensional analysis, well established in physics and engineering. Just as the Reynolds number normalizes fluid flow to make behavior comparable across different pipe sizes and fluid viscosities, this indicator normalizes price movement by realized volatility, producing a dimensionless space we call the Isotropic Coordinate System (ICS).
In ICS, price is expressed in natural logarithmic form and scaled by a volatility estimate (σ) derived from the Yang-Zhang (2000) method — a drift-invariant estimator that incorporates Open, High, Low, and Close data. The resulting vertical axis is dimensionless: one unit equals one standard deviation of recent realized price behavior. When trend angles are measured in this space, 45° indicates approximately one σ of movement per bar — whether the chart shows a penny stock, a major currency pair, or a commodity index.
Traditional chart coordinates assign no fixed relationship between the price axis and the time axis. Resizing the chart window changes the visual slope of the same price movement — a compressed view may show 52° while a stretched view of the same data shows 25°. The angle is a display artifact, not a market property. The Isotropic Coordinate System (ICS) addresses this by normalizing log-price by realized volatility (σ). In this space, the trend angle is designed to remain constant regardless of how the chart is displayed — because it measures price displacement in units of σ per bar, not in pixels per pixel.
🔷 HOW THE MODULES WORK TOGETHER
ST-EP06 operates as a deterministic pipeline where each stage consumes the output of the one before it:
Realized volatility estimation (σ) → Structural block construction → Monotonic direction detection → ICS angle measurement → Channel boundary fitting → Six-scale parallel analysis → Consensus aggregation → Breakout and retest state tracking → Dashboard narrative generation
The Yang-Zhang σ provides the normalization constant for every downstream computation. Price history is then partitioned into structural blocks, each distilled to a single central tendency that resists close-price bias. Consecutive block centers are compared to identify the longest uninterrupted directional segment. The slope of that segment, measured in σ-normalized space, yields the ICS angle. Four price extremes located within the segment define two log-linear channel boundaries. This complete pipeline runs independently at six temporal scales, and their independent outputs are aggregated into a structural consensus. A finite-state machine then tracks the evolving relationship between price and the primary channel — breakout, retest, confirmation, or failure — and translates it into a single-line human-readable narrative.
ST-EP06 operates as a deterministic sequential pipeline. Yang-Zhang volatility (σ) provides the normalization constant that flows into every downstream stage. Price history is partitioned into structural blocks, each reduced to a geometric mean. The longest monotonic segment determines direction, and its slope in σ-normalized space yields the ICS angle. Four price extremes define the channel boundaries. This complete pipeline runs independently at six scales — 3, 7, 13, 19, 29, and 47 bars per block — all prime numbers, chosen to minimize harmonic overlap so that multiple scales are unlikely to lock onto the same cyclical artifact. Scale 19 (highlighted) serves as the primary engine: it is the only scale that maps to the user's Trend Block Period input, and the only scale whose output drives the chart-overlay channel lines, the projection, the diamond markers, and the breakout/retest state machine. The other five scales operate at fixed periods and contribute exclusively to the cross-scale consensus count — providing structural context that a single scale cannot offer alone. When 5 or 6 of the 6 scales agree on direction, it suggests a structural trend visible across a broad range of temporal resolutions.
🔷 DATA ANCHORING
Every structural computation in ST-EP06 — volatility, block means, direction, channel coordinates, state machine transitions, and dashboard narrative — is governed by a single anchoring reference, selected through the Calculation Bar input.
Live Bar mode (default): the anchor is the current forming bar. Values update with each incoming tick. This is standard TradingView behavior and means the indicator may exhibit intra-bar repaint — the live bar's data enters all computations as it evolves.
Close Bar mode: the anchor shifts to the last fully confirmed (closed) bar. The forming bar is excluded from every computation. Values lock once a bar closes and do not change retroactively. This mode is intended for structural analysis, back-testing, and any workflow where historical consistency is a priority.
One deliberate exception is maintained in both modes: the dashboard header always displays the current live closing price (Live Exception protocol), preserving real-time price awareness regardless of how the indicator's structural engine is anchored.
Two modes, same chart moment. In Live Bar the anchor sits on the forming bar, so every value updates tick-by-tick and may repaint within the bar. In Close Bar the anchor shifts to the last closed bar, locking all structural values once the bar closes. The only exception is the dashboard header row, which always displays the live closing price in both modes, so real-time price awareness is never lost.
🔷 YANG-ZHANG VOLATILITY (σ)
The foundation of the ICS is a robust volatility estimate. ST-EP06 uses the Yang-Zhang (2000) realized volatility estimator, an academically established method that combines three variance components:
Overnight variance — capturing the gap between consecutive sessions, measured from the prior close to the current open.
Intraday variance — capturing the movement from open to close within each session.
Range-based variance — using the Rogers-Satchell (1991) estimator, which extracts additional information from the high and low prices without assuming zero drift.
These three components are blended using an optimal weight that is designed to minimize estimation error. The resulting σ updates every bar, adapts to changing market conditions, and — crucially — is drift-invariant: it is intended to remain unbiased whether the market is trending strongly or mean-reverting.
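For reference, a minimal Pine sketch of the textbook Yang-Zhang estimator with the standard k weighting (illustrative only, not this indicator's exact code):
yangZhang(int n) =>
    float overnight = math.log(open / close[1])   // close-to-open log return
    float openClose = math.log(close / open)      // open-to-close log return
    float rs = math.log(high / open) * math.log(high / close) + math.log(low / open) * math.log(low / close)   // Rogers-Satchell term
    float k = 0.34 / (1.34 + (n + 1) / (n - 1.0))   // the usual error-minimizing weight
    math.sqrt(ta.variance(overnight, n) + k * ta.variance(openClose, n) + (1.0 - k) * ta.sma(rs, n))

float sigmaYZ = yangZhang(20)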
🔷 BLOCK CONSTRUCTION
Rather than analyzing individual bars, ST-EP06 partitions recent price history into consecutive non-overlapping blocks. Each block spans a user-defined number of bars (the Trend Block Period input) and is reduced to a single representative value: the geometric mean of the block's highest high and lowest low, computed in logarithmic space.
This log-midpoint serves as the block's central tendency. Unlike a simple average of closing prices, it captures the structural center of the entire price range within the block, avoiding bias toward any single price point. The number of consecutive blocks compared is controlled by the Trend Block Groups input — more groups means deeper lookback and the ability to detect longer structural trends.
Price history is partitioned into consecutive non-overlapping blocks. Each block reduces to a single log-midpoint — the geometric mean of its highest high and lowest low. Connecting the midpoints forms the representative chain used for trend detection.
🔷 DIRECTION DETECTION + ICS ANGLE
Once blocks are constructed, the engine compares their geometric means in sequence, starting from the most recent. It identifies the longest consecutive segment where each block's central tendency moves in the same direction — either consistently rising or consistently falling. A single reversal terminates the segment.
The slope of this segment is then measured in ICS space: the logarithmic price difference between the oldest and newest blocks in the segment, divided by σ, divided by the number of bars between them. The arctangent of this normalized slope produces the ICS angle in degrees.
If the absolute angle falls within the Range Threshold (a user-configurable dead zone in degrees), the direction is classified as ranging rather than trending. This threshold acts as a sensitivity filter — wider values require steeper moves before declaring a trend, narrower values respond to subtler directional shifts.
An ICS angle of 45° indicates approximately one σ of price movement per bar. An angle near 0° suggests the market may be structurally flat. Because σ adjusts for volatility and the logarithm adjusts for price level, these angles are intended to be directly comparable across any instrument and any timeframe.
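In Pine terms the angle measurement reduces to a couple of lines (a hedged sketch; p0 and p1 stand for the segment's oldest and newest block means, bars for the bar span between them, and sigma for the Yang-Zhang estimate):
// ICS slope: log price displacement per bar, expressed in units of sigma
float slope = (math.log(p1) - math.log(p0)) / (sigma * bars)
float icsAngle = math.todegrees(math.atan(slope))   // 45 degrees ~ one sigma per bar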
🔷 CHANNEL FITTING
Within the identified trending segment, the engine locates four price extremes: the highest high, the lowest high, the highest low, and the lowest low — each paired with its bar position. These four points define two linear boundaries in ICS space.
During an uptrend, the upper boundary is fitted through the lowest high and highest high (capturing the rising ceiling), while the lower boundary is fitted through the lowest low and highest low (capturing the rising floor). During a downtrend, the fitting order reverses to capture descending structure. During a ranging market, the channel uses horizontal boundaries at the segment's absolute high and low.
All boundary computations occur in the σ-normalized logarithmic coordinate system, meaning the channel lines represent geometric (log-linear) paths in price space — curves that naturally follow multiplicative price behavior rather than additive assumptions.
Within the trending segment, four extremes — HH, LH, HL, LL — define two log-linear boundaries. In an uptrend, the upper line fits through LH and HH, the lower through LL and HL. The direction reverses the fitting order for downtrends, and a ranging market uses horizontal boundaries.
🔷 6-SCALE PARALLEL ANALYSIS
A single temporal scale may capture the trend at one resolution but miss structure at others. ST-EP06 runs the complete pipeline — volatility normalization, block construction, direction detection, ICS angle, and channel fitting — independently at six different scales: 3, 7, 13, 19, 29, and 47 bars per block. These values were chosen as prime numbers to minimize harmonic overlap between scales.
Scale 19 serves as the primary engine and maps to the user's Trend Block Period input. The other five scales use fixed periods, providing a structural context that the primary engine alone cannot offer.
The dashboard displays each scale's independent trend direction. A consensus count shows how many of the six scales agree: 5/6 or 6/6 agreement suggests a structural trend that is visible across multiple temporal resolutions, while low agreement may indicate transitional or conflicting structure.
🔷 BREAKOUT / RETEST STATE MACHINE
ST-EP06 includes a 5-state finite automaton that tracks price's structural relationship to the primary channel boundaries:
Inside — price is observed between the channel floor and ceiling. The dashboard shows the position as a percentage: distance from floor and distance to ceiling (summing to 100%).
Breakout Up / Breakout Down — price has exited above the ceiling or below the floor. The dashboard shows the breakout price and the percentage of channel width that price has moved beyond the boundary.
Retest Up / Retest Down — after a breakout, price has moved at least one σ away from the boundary (establishing distance), then returned to test it. The dashboard shows both the original breakout price and the current retest level.
Transitions between states use dynamic σ-based thresholds rather than fixed percentages, meaning the sensitivity automatically adjusts with market volatility. Additional flags track:
✓ Confirmed — a breakout that has been retested and bounced at least one σ away from the boundary.
(gap) — price crossed the entire channel width in a single transition.
Failed breakout — price re-entered the channel after initially breaking out.
Direction reset — the primary trend direction changed, wiping all breakout state.
🔷 VISUAL TOOLS
All chart-overlay elements are drawn from the primary engine (scale 19):
Channel lines — solid upper and lower boundaries from the segment start to the anchor bar, colored by trend direction (configurable up/down/range colors, width, and line style).
Projection lines — dotted forward extension of the channel slopes beyond the anchor bar, providing a visual reference for potential future support and resistance. The projection offset, width, and style are independently configurable.
Channel fill — semi-transparent shading between channel boundaries, with independent color selection and adjustable transparency. Applies to both the solid channel and projection segments.
Diamond markers (◆) — placed at the channel endpoints on the anchor bar. Hovering reveals a tooltip with the anchored close price, ceiling level, floor level, and the price's position as a percentage of channel width.
Direction label — positioned at the midpoint between segment start and projection end. Displays the trend arrow, direction text, and ICS angle (e.g., "▲ UP +7.3°"). Tooltip includes block count.
🔷 DASHBOARD
A compact information table appears at the top-right corner of the chart, organized in 5 rows:
Header — indicator name, ticker symbol, timeframe, and live price (always live under the Live Exception protocol, even in Close Bar mode).
Period — the six scale values (3, 7, 13, user's period, 29, 47) displayed across columns. The primary engine column is highlighted.
Trend — per-scale trend direction with directional arrows (▲ UP, ▼ DN, ◈ RNG) and color coding.
Agreement — consensus count (e.g., "5/6 UP") with the primary channel ceiling (▲) and floor (▼) price levels.
Narrative — a single merged row presenting the breakout/retest state machine output as a human-readable sentence with distance measurements. This row updates dynamically as price interacts with the channel.
All dashboard text, tooltips, and narrative phrases are fully localized.
🔷 ALERT CONDITIONS
ST-EP06 provides 19 alert conditions organized in 5 categories, all gated by a master Enable Alerts toggle:
D · Direction (3 alerts) — fires when the primary engine trend changes to uptrend, downtrend, or range.
B · Breakout (4 alerts) — fires on initial breakout above ceiling or below floor, and separately on confirmed breakout (retested and bounced).
R · Retest (2 alerts) — fires when price returns to test the boundary after establishing distance.
S · Structural (5 alerts) — fires on gap-through events (price crosses entire channel), failed breakouts (price re-enters channel), and direction resets (trend change wipes state).
A · Agreement (5 alerts) — fires when cross-scale consensus reaches significant thresholds: full bullish (6/6), strong bullish (5/6), full bearish (6/6), strong bearish (5/6), or range consensus (≥4/6).
Important: alerts require Calculation Bar = Live Bar. In Close Bar mode, all alert conditions are automatically suppressed and a visual warning is displayed on the chart — because Close Bar mode intentionally lags by one bar, which is semantically incompatible with live alert delivery.
🔷 LANGUAGE SUPPORT
The dashboard, all tooltips, the breakout/retest narrative, and the alert warning label are available in 7 languages:
English · Türkçe · العربية · Русский · Italiano · Português (BR) · 中文
Select the preferred language from the Language dropdown in the Display settings group. All structural and numerical outputs remain unchanged — only the display language of text elements is affected.
🔷 HOW TO USE
Apply ST-EP06 to any chart — the indicator is designed to work across instruments (equities, forex, crypto, commodities, indices) and timeframes without parameter re-optimization, because the ICS framework normalizes for volatility and price level automatically.
Start with the default settings (Period 26, Groups 5, Sigma Length 20) and observe how the channel captures the dominant structural trend. The 6-scale consensus in the dashboard may help assess whether the observed trend is isolated to one temporal resolution or confirmed across multiple scales.
The Calculation Bar setting is a structural decision: use Live Bar for real-time monitoring and alert-driven workflows; use Close Bar for analysis and back-testing where historical stability is prioritized.
The ICS angle on the direction label provides a quantitative measure of trend intensity. Comparing angles across different instruments or timeframes is one of the intended use cases of the ICS framework — a 15° angle on one chart and a 15° angle on another may suggest similar structural momentum relative to each market's own volatility.
The breakout/retest narrative in the dashboard bottom row is designed to provide context-rich status updates without requiring manual chart reading. The σ-based thresholds ensure that breakout sensitivity adapts to current market conditions rather than relying on fixed values.
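For intuition, here is a minimal sketch of a σ-scaled transition test. This is not the script's actual code: a simple close-to-close standard deviation stands in for its Yang-Zhang estimator, and the channel ceiling is a rolling-high placeholder rather than the real ICS channel.
//@version=6
indicator("Sigma threshold sketch", overlay = true)
// Hedged sketch: ceilLvl is a placeholder boundary, not the ICS channel,
// and ta.stdev stands in for the Yang-Zhang volatility estimator.
float sigma   = ta.stdev(close, 20)
float ceilLvl = ta.highest(close, 26)[1]
bool breakoutUp = ta.crossover(close, ceilLvl)
bool distanceUp = close > ceilLvl + 1.0 * sigma            // at least 1σ beyond the boundary
bool retestUp   = distanceUp[1] and close <= ceilLvl + 0.25 * sigma
plotshape(breakoutUp, "Breakout Up", shape.circle, location.abovebar)
plotshape(retestUp, "Retest Up", shape.triangleup, location.belowbar)
Because the threshold is a multiple of σ rather than a fixed percentage, the same settings tighten automatically in quiet markets and loosen in volatile ones.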
🔷 SETTINGS
Calculation — Calculation Bar (Live/Close Bar anchoring), Trend Block Period (bars per block), Trend Block Groups (consecutive blocks compared), Range Threshold (ICS dead zone in degrees), Yang-Zhang Sigma Length (volatility lookback).
Channel Lines — Up Color, Down Color, Range Color, Line Width, Line Style.
Projection Lines — Projection Offset (forward bars), Projection Width, Projection Style.
Display — Language (7 options), Show Channel (toggle overlay), Show Fill (toggle shading), Show Dashboard (toggle table), Dashboard Font Size.
Channel Fill — Fill Up Color, Fill Down Color, Fill Range Color, Fill Transparency.
Alerts — Enable Alerts (master toggle, requires Live Bar mode).
🔷 DISCLAIMER
ST-EP06 is an educational and analytical tool. It is designed to provide structural context through σ-normalized trend channels and multi-scale analysis. It does not generate buy or sell signals, does not predict future price movement, and is not intended as financial advice. Historical patterns observed through this indicator do not guarantee future outcomes. All trading decisions remain the sole responsibility of the trader.
AI Predictive Flow (Zeiierman)
█ Overview
AI Predictive Flow (Zeiierman) is a pattern-based oscillator that estimates future price direction by comparing the current market state to similar historical conditions.
Instead of relying on traditional indicators like momentum or moving averages alone, the script builds a multi-feature representation of price behavior and uses a k-Nearest Neighbors (kNN) model to identify past patterns that closely resemble the present.
From those matches, it derives an expected forward return, which is then transformed into a smooth oscillator and a predicted trend regime.
The result is a forward-looking signal that reflects a data-driven expectation based on similar past patterns, not just current price movement.
█ How It Works
⚪ Feature Extraction (Market State Model)
The script converts price into a compact feature set that describes the current market state.
It uses four core features:
Short-term return
Momentum
RSI bias
EMA spread
These are created inside the feature function:
feat(shift, mode) =>
    // The bracketed history offsets below were stripped by the publishing
    // platform and are reconstructed here; `mLen` (a momentum-length input)
    // is an assumed name.
    c  = close[shift]
    c1 = close[shift + 1]
    cm = close[shift + mLen]
    ef = ta.ema(close, fLen)[shift]
    es = ta.ema(close, sLen)[shift]
    r  = ta.rsi(close, rsiLn)[shift]
    float v = 0.0
    if mode == 1
        v := c1 != 0 ? math.log(c / c1) : 0.0      // short-term log return
    else if mode == 2
        v := cm != 0 ? (c - cm) / cm : 0.0         // momentum displacement
    else if mode == 3
        v := (r - 50.0) / 50.0                     // RSI bias around the midline
    else
        v := c != 0 ? (ef - es) / c : 0.0          // normalized EMA spread
    v
Each feature captures a different dimension of price behavior:
return measures immediate movement
momentum measures directional displacement
RSI bias measures internal pressure
EMA spread measures trend structure
These values are then stacked across multiple bars to form the pattern used for comparison.
⚪ Pattern Memory (Historical Pattern Library)
The script stores rolling sequences of each feature into separate matrices so the current market state can be compared against past states.
That process is built here:
pushFeat(mat, mode) =>
    // The `<float>` type arguments were stripped by the platform; restored here.
    vals = array.new<float>(tot, 0.0)
    for i = 0 to tot - 1
        array.set(vals, tot - 1 - i, feat(i, mode))
    cur = array.slice(vals, tot - len, tot)
    old = array.slice(vals, 0, len)
    matrix<float> out = matrix.new<float>(1, len, 0.0)
    for i = 0 to len - 1
        matrix.set(out, 0, i, array.get(cur, i))
    hist = array.new<float>(len, 0.0)
    for i = 0 to len - 1
        array.set(hist, i, array.get(old, i))
    if mat.rows() >= mem
        mat.remove_row(0)
    mat.add_row(mat.rows(), hist)
    out
This creates:
a current feature row
a rolling history of prior feature patterns
So rather than comparing single-bar values, the model compares multi-bar pattern structure.
⚪ Pattern Matching Engine (kNN Distance Model)
Once the current feature pattern is built, it is compared to all stored historical patterns.
Distance is measured feature-by-feature across the full pattern length:
getDist(matrix<float> a1, matrix<float> a2, matrix<float> a3, matrix<float> a4, matrix<float> b1, matrix<float> b2, matrix<float> b3, matrix<float> b4) =>
    out = array.new<float>(b1.rows(), 0.0)
    for i = 0 to b1.rows() - 1
        s = 0.0
        d1 = a1.diff(b1.submatrix(i, i + 1)).row(0)
        d2 = a2.diff(b2.submatrix(i, i + 1)).row(0)
        d3 = a3.diff(b3.submatrix(i, i + 1)).row(0)
        d4 = a4.diff(b4.submatrix(i, i + 1)).row(0)
        for j = 0 to len - 1
            s += math.pow(d1.get(j), 2) * 0.25 +
                 math.pow(d2.get(j), 2) * 0.25 +
                 math.pow(d3.get(j), 2) * 0.25 +
                 math.pow(d4.get(j), 2) * 0.25
        out.set(i, math.sqrt(s))
    out
This produces a similarity score for every stored pattern. A smaller distance means the past setup looked more like the present one.
⚪ Prediction Model (kNN Forward Expectation)
After the distances are ranked, the script selects the nearest neighbors and averages their future outcomes.
The kNN model is implemented here:
knn(dist, n) =>
    ix = dist.sort_indices()
    useN = math.min(n, ix.size())
    sumD = 0.0
    avg = 0.0
    for i = 0 to useN - 1
        sumD += dist.get(ix.get(i))
    if useN > 0
        for i = 0 to useN - 1
            d = dist.get(ix.get(i))
            w = useN > 1 ? (sumD != 0 ? (1 - d / sumD) : 1.0) : 1.0
            avg += Y.get(ix.get(i)) * w
    avg
The forward return used for comparison is defined here:
y := math.log(base) - math.log(base[fwd])  // the bracketed offset was stripped by the platform; `fwd` (the forecast horizon) is an assumed name
This represents the forward return following each historical pattern. The result is a weighted expectation of future movement, not just a reading of current trend.
⚪ Predictive Oscillator
The raw kNN prediction is smoothed and transformed into the main oscillator and signal line.
pred_ = ta.ema(pred, smth)
if not na(pred)
    predSm := smth > 1 ? pred_ : pred
osc = ta.ema(predSm, oscLn)
sig = ta.ema(osc, sigLn)
hist = osc - sig
This creates:
Oscillator = smoothed expected return
Signal line = secondary smoothing for crossover confirmation
Histogram = distance between oscillator and signal
⚪ Predicted Trend Regime
Beyond the oscillator, the script also builds a broader trend regime using the predicted price path.
First, the raw prediction is converted into a projected price line:
predLine := base + base * (math.exp(pred) - 1)
Then a regime band is created using ATR:
hiRef = predLine + bandM * atr
loRef = predLine - bandM * atr
if ta.highest(hiRef, regLn) == hiRef
    trendUp := true
if ta.lowest(loRef, regLn) == loRef
    trendUp := false
This background state represents:
bullish predicted regime when the projected path is pressing into new highs
bearish predicted regime when the projected path is pressing into new lows
So the background is not showing the raw price trend. It is showing the model’s predicted regime bias.
█ How to Use
⚪ Read the Oscillator
Above 0 → bullish expectation
Below 0 → bearish expectation
Near 0 → neutral/low conviction
Far from 0 → strong directional push
Use crossovers for entry timing:
Bullish crossover → potential upward continuation
Bearish crossover → potential downward continuation
⚪ Use the Predicted Trend Regime
The background highlights the model’s broader directional bias:
Green → predicted bullish regime
Red → predicted bearish regime
Regime shifts often indicate:
early trend transitions
continuation confirmation
structural changes in expectation
⚪ Combine Signals
Best use comes from alignment:
Oscillator above zero + bullish regime + signal → strong continuation bias
Oscillator below zero + bearish regime + signal → strong downside bias
Divergence between the two → caution / mixed signals
█ Settings
Pattern Length – Controls how many bars define the current pattern. Higher values capture more structure, lower values increase responsiveness.
Memory Size – Number of historical patterns stored for comparison. Larger values improve context but increase computation.
Neighbors (k) – Number of closest matches used in prediction. Lower values are more reactive, higher values are smoother.
Prediction Smoothing – EMA smoothing applied to the raw prediction. Reduces noise at the cost of lag.
Signal Length – Smoothing of the signal line used for crossover signals.
-----------------
Disclaimer
The content provided in my scripts, indicators, ideas, algorithms, and systems is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any financial instruments. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
Focus Bars [Kioseff Trading]
Hello Traders!
🔹 Focus Bars
Focus Bars is a lower-timeframe reconstruction tool designed to break each candle into a price-based internal structure.
Instead of viewing a bar as a single OHLC print, this tool redistributes intrabar participation across price levels, showing where activity, delta, and directional pressure concentrated inside the bar itself.
Think of it as a way to look inside the candle.
intrabar participation distributed by price level
buy vs sell pressure mapped inside each bar
delta-driven visualization of internal structure
volume-based or delta-based profile sizing
stacked recent bars for direct comparison
lower timeframe reconstruction of candle internals (up to 1 tick)
🔹 What the tool shows
🔸 Focus Bar Structure
Each visible bar is reconstructed using lower timeframe data and divided into configurable price rows.
This allows the script to build an internal map of activity inside the candle, showing how participation distributed throughout its range.
This helps reveal:
where activity concentrated inside the bar
which price regions attracted the most interaction
how the bar built from low to high
🔸 Directional participation
The script estimates directional pressure using lower timeframe price movement and distributes that pressure across the bar’s traded range.
This allows you to observe:
where buying pressure was strongest
where selling pressure dominated
how directional activity distributed through the candle
Instead of treating the candle as one net result, Focus Bars breaks it into a layered participation structure.
🔸 Volume mode
In its default form, the profile width reflects total intrabar participation at each price level.
This helps identify:
high activity zones inside the bar
areas where the market spent more effort
internal high-interest regions
This mode focuses on where the bar traded most actively, regardless of which side was dominant.
🔸 Delta Bars mode
When Delta Bars mode is enabled, the visualization shifts from general activity to directional imbalance.
Positive delta levels extend one way, while negative delta levels extend the other, helping expose where directional pressure accumulated inside the bar.
This makes it easier to see:
which prices were dominated by buyers
which prices were dominated by sellers
where internal imbalance became most extreme
This mode is about pressure and imbalance, not just participation.
🔸 Recent bar stacking
The script displays multiple recent reconstructed bars side by side, allowing you to compare internal structure across the most recent candles.
This helps reveal:
whether participation is shifting higher or lower
whether recent bars are building similarly or differently
how internal pressure changes from one bar to the next
Rather than looking at candles in isolation, you get a stacked structural view of recent bar development.
🔸 Price-row resolution
Each bar is divided into a configurable number of rows.
Higher row counts provide finer structural detail, while lower row counts simplify the visualization.
This lets you control the balance between:
detail
clarity
performance
🔸 Lower timeframe reconstruction
The script uses lower timeframe data to estimate how participation distributed through each candle.
Granularity can be selected between:
1-minute
1-second
1-tick
This allows the internal structure to become more detailed as lower granularity data becomes available.
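As a hedged sketch of the general mechanic (illustrative names, not the published source), intrabar prints can be requested at a lower timeframe and binned into equal price rows spanning the bar's range:
//@version=6
indicator("Intrabar row sketch", overlay = true)
// Pull 1-minute closes and volumes inside each chart bar, then bin each
// print into one of `rows` equal price buckets across the bar's range.
[ltfClose, ltfVol] = request.security_lower_tf(syminfo.tickerid, "1", [close, volume])
int rows = input.int(10, "Price rows")
var float[] rowVol = array.new<float>(rows, 0.0)
array.fill(rowVol, 0.0)
float rng = high - low
if ltfClose.size() > 0 and rng > 0
    for i = 0 to ltfClose.size() - 1
        // map the intrabar close to a row index 0..rows-1
        int r = math.min(rows - 1, int((ltfClose.get(i) - low) / rng * rows))
        rowVol.set(r, rowVol.get(r) + ltfVol.get(i))
plot(rowVol.max(), "Max row volume")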
🔸 Buy / sell volume labels
Each price row includes separate displayed values for:
sell-side participation
buy-side participation
This gives a direct read on how activity distributed at each level, rather than relying only on color or profile width.
🔸 Gradient-based intensity
Color gradients help represent the magnitude of participation and directional pressure at each price level.
This makes it easier to spot:
high-intensity zones
low-interest areas
strong directional concentrations
Stronger color intensity reflects stronger internal participation or imbalance.
🔹 How to read it
Each component gives a different layer of information:
Candle body / wick → the outer structure of the bar
Profile width → where participation concentrated
Delta mode → where directional imbalance built
Buy / sell labels → how each side contributed at a level
Stacking → how internal structure changes bar to bar
🔹 Why this tool is useful
It gives you:
a way to look inside candles instead of only at candle outcomes
price-based intrabar participation mapping
clear visualization of internal volume and delta structure
context for where buying or selling pressure concentrated
a deeper structural view of recent bar development
🔹 Best use cases
analyzing internal candle structure
comparing recent bars side by side
spotting hidden participation concentrations
finding where directional pressure built inside a move
adding lower-timeframe context to bar-by-bar analysis
🔹 Important note
This tool uses lower timeframe data to reconstruct intrabar structure.
This means:
it is an approximation of internal order flow
accuracy depends on available lower timeframe data
selected granularity impacts precision
different symbols and data feeds may produce different levels of detail
🔹 Inputs you can customize
The script includes flexible controls such as:
granularity selection
bar count to display
row resolution
volume mode vs Delta Bars mode
color customization
display offset
Closing Notes
Focus Bars is built to shift the focus from how a candle finished to how it developed internally .
It helps reveal not just what the bar looked like from the outside, but where participation and pressure were concentrated inside it .
Thank you for checking it out!
Carrier Volatility [Pumori]
This is the foundational Pulse component of the ET Massif Framework research suite.
Description
Pumori is a high-resolution volatility and impulse response tool built around an ultra-short fractional length (0.1 EMA). It is a high-frequency carrier framework that exposes the formation of volatility through controlled instability rather than smoothing. Unlike traditional indicators that smooth or lag volatility, Pumori captures high-frequency energy, allowing volatility to be observed in near real-time as it forms.
Construct
At its core, Pumori uses:
Dual 0.1-length EMA
A sub-unit length (N < 1) is intentionally used to produce an anti-smoothing response, where the recursive term overreacts to incoming data and amplifies micro-movements. The EMA is applied twice recursively, producing a controlled oscillatory response. This interaction forms the carrier layer, where continuous oscillation exposes high-frequency volatility directly.
Flexible source input (RSI, RSI SMA, close, custom)
Three default source modes are available, allowing Pumori to operate across different domains. RSI is set as the default carrier as it represents normalized momentum in a bounded range, providing a stable domain for the transform. The chosen source defines how the carrier behaves and directly influences stability, noise profile, and interpretability.
Volatility Envelope
The recursive overshoot–correction cycle forces continuous oscillation around the source, forming a dynamic envelope that expands and contracts with volatility. Pumori does not measure volatility; it reveals volatility formation.
Think of Pumori like an AM radio carrier wave: the point is not signal transmission, but that the carrier must operate at a high enough frequency for changes to become visible immediately. Most traditional volatility measures operate over a fixed lookback window, which makes them inherently lagging. Pumori, instead, allows volatility to express itself immediately: the 0.1 EMA acts as a high-frequency baseline upon which expansion and contraction are directly observed.
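As a hedged sketch (not the published source), a fractional-length EMA can be written directly from the EMA recursion; with n = 0.1 the smoothing factor exceeds 1, which is exactly what produces the overshoot:
//@version=6
indicator("Fractional EMA sketch")
// ta.ema() only accepts integer lengths, so the sub-unit length is applied
// through the recursion itself. With n = 0.1, alpha = 2 / (n + 1) ≈ 1.82 > 1,
// so each update overshoots the source instead of smoothing toward it.
fracEma(float src, float n) =>
    float alpha = 2.0 / (n + 1.0)
    var float e = na
    e := na(e) ? src : e + alpha * (src - e)
    e
// applied twice recursively on the default RSI carrier, per the description
float carrier = fracEma(fracEma(ta.rsi(close, 14), 0.1), 0.1)
plot(carrier, "Carrier")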
Modes
Default
RSI is the default baseline configuration because it provides a bounded and naturally oscillatory structure (0–100), allowing the carrier to behave in a stable and interpretable manner. Unlike price, which can expand unpredictably, RSI compresses extremes and standardizes movement, making volatility expansion and contraction easier to observe. The result is a clean carrier waveform that offers the best balance between responsiveness and readability.
Diagnostic
RSI (SMA), the simple moving average of RSI, is applied to the carrier transform. This reduces internal jitter while preserving underlying structure. It is used to assess movement quality, separating clean, controlled trends from noisy or chaotic ones.
Price
Applying it on price produces a highly reactive output where volatility expansion and contraction are expressed in a zig-zag band around price. The oscillations reflect immediate changes in movement and make volatility clustering visible within trend.
How to Use
Volatility Gauge
Band expansion indicates an active volatility state.
Band contraction indicates a suppressed environment.
No volatility = no opportunity.
Market Progression
A low-volatility environment facilitates smooth, steady price progression. In contrast, high-volatility states produce a chaotic path characterized by erratic movement and uneven progression, signaling potential structural instability and reduced directional efficiency.
Future Development
A fractional EMA on price is further planned as an input component for adaptive filtering systems (e.g., Kalman filter integration). Specifically, the high-frequency oscillations of the volatility band provide a direct proxy for noise measurement, allowing dynamic adjustment of model responsiveness without introducing lag.
Default Settings
EMA Lengths: 0.1, 0.1
Default Mode: RSI (stable carrier behavior)
Diagnostic Mode: RSI SMA (reduced noise, structural clarity)
Price Mode: Close (Volatility clustering & envelope)
RSI Length: 14
RSI SMA Length: 14
Japanese Summary
Pumori is a high-resolution volatility indicator built on an ultra-short 0.1-length EMA (exponential moving average). Rather than a lagging, averaging measure such as the traditional ATR (Average True Range), it aims to capture the momentary expansion and contraction of market volatility in real time.
Key features:
High-precision band formation: builds a zig-zag volatility band from price data or RSI.
Intuitive view of volatility formation: directly observe the earliest micro price changes (the onset of volatility) that conventional indicators tend to miss.
Market-quality identification: instantly distinguish whether the market is in a smooth trend or a noisy, unstable state.
Chinese Summary
Pumori is a high-resolution volatility tool based on an ultra-short 0.1 EMA, designed to capture the expansion and contraction of market volatility in real time. Unlike lagging, averaging indicators such as the traditional ATR, Pumori reacts to changes in volatility immediately.
Core features:
Dynamic zig-zag volatility band: applies flexibly to price, RSI, or smoothed RSI, forming a dynamic band that closely tracks the move.
Capturing volatility formation: directly observes the onset and bursts of volatility rather than averaging values after the fact.
Highly sensitive momentum detection: reacts strongly to shifts in market momentum, effectively separating smooth trends from market noise.
Primary uses:
Market-environment assessment: determine whether current conditions are tradable (stable trend vs. chaotic disorder).
Volatility-quality analysis: evaluate the quality of a move, distinguishing healthy volatility from ineffective noise.
Foundation for advanced algorithms: serves as the core input for future Kalman-filter-based volatility adjustment.
Disclaimer:
This script is a research tool for market structure analysis and educational purposes only. It does not constitute financial advice. Trading involves risk.
Multi Timeframe Volume Profiles [TradingIQ]
Hello Traders!
🔹 Multi-Timeframe Volume Profiles
Multi-Timeframe Volume Profiles is a visualization tool designed to show how volume and participation develop across multiple timeframes - all in one view.
Instead of switching between charts and trying to mentally piece together context, this tool lets you see how lower timeframe activity builds into higher timeframe structure.
It focuses on answering a more practical question:
Where is price being accepted… and how does that agreement change across timeframes?
volume distribution across multiple timeframes
agreement or conflict between structures
where price is accepted vs rejected
how lower timeframe activity builds higher timeframe moves
who is in control across different horizons
🔹 What the indicator shows
🔸 Stacked multi-timeframe profiles
Each profile represents a different timeframe and is displayed directly on the chart.
This allows you to instantly see:
how multiple timeframes align at key price levels
where value is overlapping (high acceptance)
where structure begins to diverge
Instead of flipping between charts, everything is visible at once.
🔸 Traditional Volume Profile (standard model)
This mode shows total traded volume at each price level for every selected timeframe.
It helps identify:
point of control (POC)
value area high (VAH)
value area low (VAL)
high participation zones
This is the classic way of understanding where trading activity accumulated.
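For intuition, here is a minimal sketch of how a POC and value area can be derived from a per-row volume array. This is an illustrative helper under standard volume-profile conventions, not the published implementation:
//@version=6
indicator("Value area sketch")
// Finds the POC row, then expands outward until vaPct of total volume is covered.
f_valueArea(float[] rowVol, float vaPct) =>
    int poc = 0
    for i = 1 to rowVol.size() - 1
        if rowVol.get(i) > rowVol.get(poc)
            poc := i
    float target = rowVol.sum() * vaPct
    float acc = rowVol.get(poc)
    int up = poc
    int dn = poc
    while acc < target and (up < rowVol.size() - 1 or dn > 0)
        float above = up < rowVol.size() - 1 ? rowVol.get(up + 1) : -1.0
        float below = dn > 0 ? rowVol.get(dn - 1) : -1.0
        if above >= below
            up += 1
            acc += above
        else
            dn -= 1
            acc += below
    [poc, dn, up]  // POC row, value-area low row, value-area high row
var float[] demoRows = array.from(1.0, 3.0, 9.0, 4.0, 2.0)
[pocRow, valRow, vahRow] = f_valueArea(demoRows, 0.7)
plot(pocRow, "POC row")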
🔸 Delta Profile (aggressive participation)
Delta mode shows the difference between buying and selling pressure at each price level.
This allows you to see:
which side was more aggressive
where buyers or sellers dominated
whether control is shifting over time
This adds a layer of intent behind the volume.
🔸 Multi-timeframe structure alignment
By stacking multiple profiles, you can quickly identify:
areas where all timeframes agree (strong acceptance)
areas where lower timeframe moves oppose higher timeframe structure
decision zones where price may accept or reject
This shifts your perspective from:
“what is price doing?”
to:
“how does this move fit into the bigger picture?”
🔹 How to read it
Each timeframe adds a layer of context:
Lower timeframe → immediate structure
Higher timeframe → broader context
Together, they help answer:
is this move aligned with the bigger trend?
is this just a retracement inside a larger move?
are multiple timeframes accepting the same price?
Example interpretations:
aligned profiles → strong agreement / acceptance
misaligned profiles → potential conflict or reversal
lower timeframe strength vs higher timeframe weakness → possible fade
overlapping value areas → key decision zones
🔹 Why this indicator is useful
It gives you:
multi-timeframe context in a single view
clear visibility of agreement vs conflict
volume-based structure across different horizons
insight into where price is being accepted
a more complete view of market structure
🔹 Best use cases
multi-timeframe analysis
identifying key decision areas
understanding structure alignment
confirming or fading moves
enhancing volume profile strategies
🔹 Important consideration
This script uses lower timeframe data to construct profiles.
This means:
accuracy depends on available historical data
lower timeframe selection impacts results
extreme timeframe combinations may not work as expected
For best results, keep lower timeframe selections realistic relative to the chosen higher timeframe.
🔹 Inputs you can customize
The script includes flexible controls such as:
multiple timeframe selection (up to 5 profiles)
lower timeframe volume source
profile resolution (rows)
model type (volume or delta)
POC and value area labels
visual styling and colors
Closing Notes
This script is built to simplify multi-timeframe analysis - turning something that normally requires constant chart switching into a single, clear view.
It may receive updates based on feedback - stay tuned!
Thank you TradingView as always!
CVD Profiles [TradingIQ]
Hello Traders!
🔹 CVD Profiles
CVD Profiles is a profile-based order flow visualization tool designed to show how participation distributes across price levels - not just over time, but through price itself.
Think volume profile data + TPO time segmenting!
Instead of looking at cumulative delta as a single line, this tool breaks it down into a price-based structure, revealing where activity, imbalance, and participation actually occurred within the session.
It focuses on answering a more important question:
Where did participation concentrate… and how did it distribute across price/time?
cumulative delta distributed by price level
buy vs sell activity mapped into profiles
imbalance and dominance across structure
value areas and point of control
activity concentration (volume, USD, or delta-based)
how participation builds within a session
🔹 What the tool shows
🔸 CVD Profile (price-based structure)
Instead of viewing delta as a time series, this tool distributes it across price levels - forming a profile of participation.
This allows you to see:
where buying pressure accumulated
where selling pressure dominated
which price levels attracted the most activity
🔸 Imbalance Ratio (dominance structure)
Imbalance mode shifts the focus from raw participation to relative dominance between buyers and sellers at each price level.
Each level reflects the ratio between buy and sell activity, highlighting where one side clearly outweighed the other.
This allows you to see:
where buyers strongly dominated sellers
where sellers overwhelmed buying pressure
areas of clear directional conviction
High imbalance levels often represent:
aggressive participation
momentum-driven behavior
one-sided control at specific prices
Balanced areas, on the other hand, suggest:
indecision
two-sided trade
lack of conviction
🔸 Activity Mode (participation intensity)
Activity mode focuses on how much trading activity occurred at each price level, regardless of direction.
Instead of separating buyers and sellers, this mode aggregates total participation to reveal:
high interest zones
areas of heavy interaction
where the market spent the most effort
This helps identify:
key auction areas
high liquidity regions
zones where price is likely to react
Low activity areas often indicate:
inefficient movement
thin liquidity
potential for fast price movement
This mode is about effort - not direction.
🔸 USD Volume Mode (capital-weighted activity)
USD Volume mode builds on activity by incorporating price-weighted participation.
Instead of just counting volume, it measures:
“where was the most capital traded?”
This highlights:
price levels with the highest notional value traded
areas of significant financial commitment
where larger participants may be involved
Compared to raw activity, this mode emphasizes:
higher-priced transactions
capital concentration rather than trade count
This is especially useful for:
spotting institutional interest
identifying meaningful participation zones
filtering out low-value noise
This mode is about capital — not just volume.
🔸 Multiple profile models
The script supports different ways to interpret participation:
CVD → raw cumulative delta distribution
Imbalance Ratio → relative dominance (buy vs sell strength)
Activity → total participation intensity
USD Volume → capital-weighted activity
Each model answers a slightly different question about the market.
🔸 Value Area & POC
The tool automatically calculates:
Point of Control (POC) → highest participation level
Value Area High (VAH)
Value Area Low (VAL)
This helps identify:
fair value
high liquidity regions
areas where price is most accepted
These levels often act as key reference points for structure and reaction.
🔸 Initial Balance (IB)
The script tracks the initial balance range.
This highlights:
early session structure
range expansion vs containment
where price begins its auction
It provides context for how the session develops relative to its starting range.
🔸 Profile stacking (time progression)
Profiles are built over time and stacked horizontally, showing how participation evolves.
This allows you to observe:
shifts in dominance over time
expansion of participation into new price zones
whether activity is building or fading
Instead of a static snapshot, you get a dynamic structural progression.
🔸 Gradient-based intensity
Color gradients represent the magnitude of activity.
This helps highlight:
high participation nodes
low interest areas
extreme dominance zones
Stronger colors = stronger participation.
🔸 CVD Delta / Acceleration histogram
An off-chart histogram shows:
CVD Delta → change in participation
CVD Acceleration → change in momentum of participation
CVD Delta represents the amount of buying vs selling pressure added during the current bar.
In simple terms:
positive delta → more buying than selling
negative delta → more selling than buying
This tells you who was in control during that bar.
CVD Acceleration takes it one step further.
It measures how quickly delta itself is changing:
increasing acceleration → pressure is building
decreasing acceleration → pressure is slowing
sharp shifts → potential transitions in control
This helps answer a deeper question:
“Is participation just present… or is it expanding?”
Together, they give you a clearer read on:
whether buying/selling is increasing
whether momentum is building or fading
when participation is strengthening vs weakening
Think of it like this:
CVD Delta = current pressure
CVD Acceleration = change in pressure
Strong trends are often accompanied by:
consistent delta in one direction
positive acceleration early in the move
While weakening moves often show:
falling delta
negative or declining acceleration
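A minimal sketch of these two series, assuming up and down volume is classified by intrabar candle direction (the script's actual classification method may differ):
//@version=6
indicator("CVD delta sketch")
// Classify each 1-minute intrabar print as buy or sell volume by its candle
// direction, then sum within the chart bar.
[upV, dnV] = request.security_lower_tf(syminfo.tickerid, "1",
     [close >= open ? volume : 0.0, close < open ? volume : 0.0])
float cvdDelta = nz(upV.sum()) - nz(dnV.sum())   // current pressure
float cvdAccel = cvdDelta - nz(cvdDelta[1])      // change in pressure
plot(cvdDelta, "CVD Delta", style = plot.style_columns)
plot(cvdAccel, "CVD Acceleration")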
🔹 How to read it
Each component provides a different layer:
Profile → where participation occurred
POC / VA → where value is established
Model selection → what type of participation you're measuring
Histogram → how participation is changing
🔹 Example interpretations
high activity at a level → strong interest / potential reaction zone
thin profile areas → low liquidity / fast movement zones
POC holding → acceptance
POC shifting → changing value
expanding profile → active auction
contracting profile → consolidation
🔹 Why this tool is useful
It gives you:
price-based participation mapping
clear visualization of where trading actually occurred
context for value and liquidity
insight into dominance and imbalance
a structural view of order flow instead of just time-based data
🔹 Best use cases
identifying key reaction levels
analyzing auction behavior
tracking value shifts across sessions
confirming strength or weakness at price
enhancing liquidity-based or structure-based strategies
🔹 Important note
This tool uses lower timeframe data to reconstruct participation.
This means:
it is an approximation of order flow
accuracy depends on available intrabar data
lower timeframe selection impacts precision
🔹 Important consideration
CVD and participation:
can drive price
can fail to move price
can be absorbed by opposing liquidity
Location matters just as much as magnitude.
🔹 Inputs you can customize
The script includes flexible controls such as:
profile model selection
lower timeframe input
profile resolution (tick size)
value area percentage
fixed start vs rolling sessions
color customization
histogram mode (delta vs acceleration)
Closing Notes
This tool is built to shift your perspective from time-based indicators to price-based participation analysis.
It helps you understand not just what the market did, but where it mattered most.
It may receive updates based on feedback - stay tuned!
Thank you TradingView as always!
Volume Bubbles [QuantAlgo]
🟢 Overview
The Volume Bubbles indicator is a multi-layered volume cluster detection system that identifies statistically significant volume events directly on your price chart, classifying them by magnitude (Small, Medium, Big) and direction (Buy, Sell, Mixed). By combining adaptive percentile thresholds across multiple lookback windows with optional volume delta analysis, this indicator highlights moments of elevated trading activity that often signal institutional participation, trend acceleration, or potential reversals across every timeframe and market.
🟢 How It Works
The indicator begins by establishing a lower timeframe for volume delta calculation. When auto-select is enabled, it picks a granular timeframe based on your chart period, using 1-second bars for sub-minute charts, 1-minute bars for intraday charts, 5-minute bars for daily charts, and 60-minute bars for higher timeframes. This allows the indicator to estimate net buying and selling pressure within each chart bar:
// Note: the tuple targets on this line were stripped by the publishing
// platform; this destructure is a reconstruction showing only the value
// used below, and the remaining returned elements are omitted.
[lastDelta] = taLib.requestVolumeDelta(lowerTimeframe)
float netDelta = nz(lastDelta)
float absDelta = math.abs(netDelta)
The core detection engine then calculates percentile thresholds for both volume and absolute delta across three independent lookback windows (Short, Medium, Long). Each window computes its own threshold for each cluster tier using linear interpolation:
float vSmallShort = ta.percentile_linear_interpolation(volume, shortLen, smallPct)
float vSmallMid = ta.percentile_linear_interpolation(volume, midLen, smallPct)
float vSmallLong = ta.percentile_linear_interpolation(volume, longLen, smallPct)
This means a bar's volume is not compared against a single average but ranked against the full distribution of recent volume history from multiple perspectives. A Small cluster must exceed the 75th percentile (top 25%), a Medium cluster the 90th percentile (top 10%), and a Big cluster the 97th percentile (top 3%) by default.
To filter noise, a consensus system requires agreement across the lookback windows before confirming a cluster:
f_consensus(bool pS, bool pM, bool pL, string mode) =>
    int hits = (pS ? 1 : 0) + (pM ? 1 : 0) + (pL ? 1 : 0)
    switch mode
        "Any Window" => hits >= 1
        "Majority (2 of 3)" => hits >= 2
        "All Windows (strictest)" => hits >= 3
In Majority mode, for example, at least two of the three windows must agree that volume exceeds the threshold before a cluster is plotted. This prevents false signals from temporary spikes that look significant in one context but not another.
Once a cluster is confirmed, it is classified as Buy, Sell, or Mixed based on the selected method. Candle Direction uses the bar's open/close relationship, Delta Direction uses the sign of net volume delta, and Both requires agreement between the two, labeling any conflict as Mixed.
🟢 Key Features
▶ The indicator offers four detection methods, each designed to balance sensitivity and precision depending on data availability and trading style. A combined sketch follows the four descriptions below.
1. Volume Only: Uses raw bar volume as the sole input for cluster detection. This is the simplest and most universal mode, working on any symbol that provides volume data. It identifies all statistically elevated volume events regardless of whether buying or selling dominated, making it useful for spotting general activity surges around key levels, news events, or session opens.
2. Delta Only: Uses the absolute value of net volume delta instead of total volume. This mode triggers only when directional pressure (not just raw activity) is statistically elevated. It filters out high-volume bars where buying and selling were roughly balanced, focusing instead on bars where one side clearly dominated. Requires lower timeframe data availability.
3. Volume + Delta: Both volume and delta must independently exceed their respective percentile thresholds. This is the strictest detection mode. A cluster only appears when there is both unusually high total activity and unusually strong directional flow, filtering out ambiguous bars where volume was high but evenly split between buyers and sellers.
4. Volume OR Delta: Either elevated volume or elevated directional delta triggers a cluster. This is the most inclusive mode, capturing both pure volume events (such as index rebalancing or option expiration activity) and strong directional surges that may occur on relatively normal total volume. Best suited for traders who prefer broader coverage and are comfortable filtering signals with additional context.
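Taken together, a hedged sketch of how the four modes could gate a cluster; volPass and deltaPass stand in for the consensus-confirmed percentile checks shown earlier, and deltaProxy is a crude chart-timeframe stand-in for true lower-timeframe delta:
//@version=6
indicator("Detection mode sketch")
// Illustrative combination logic only; thresholds and tests are simplified.
string method = input.string("Volume + Delta", "Method",
     options = ["Volume Only", "Delta Only", "Volume + Delta", "Volume OR Delta"])
float deltaProxy = close >= open ? volume : -volume
bool volPass   = volume > ta.percentile_linear_interpolation(volume, 100, 90)
bool deltaPass = math.abs(deltaProxy) > ta.percentile_linear_interpolation(math.abs(deltaProxy), 100, 90)
bool cluster = switch method
    "Volume Only"    => volPass
    "Delta Only"     => deltaPass
    "Volume + Delta" => volPass and deltaPass
    => volPass or deltaPass
plotchar(cluster, "Cluster", "●", location.abovebar)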
▶ Detailed Tooltip Overlay: Hovering over any bubble reveals a comprehensive diagnostic panel summarizing the full context behind that cluster. The tooltip displays the cluster tier and direction label (e.g., BIG BUY or MEDIUM SELL), the formatted volume value, net delta value (or "n/a" if delta data is unavailable), the volume-to-average ratio expressed as a multiple, the active detection method (with a fallback note if delta was unavailable and the method defaulted to Volume Only), the individual window confirmations for both volume and delta shown as a compact S M L grid indicating which of the short, medium, and long lookback windows passed their threshold, and the classification mode used to determine the buy/sell label. This gives full transparency into exactly why each cluster was detected and how it was classified, without cluttering the chart itself.
▶ Built-in Alert System: Pre-configured alert conditions for Big clusters, Medium-or-larger clusters, and any cluster detection, allowing you to receive notifications for the volume events that matter most to your strategy.
▶ Visual Customization: Choose from 5 color presets (Classic, Aqua, Cosmic, Cyber, Neon) or define your own custom color scheme. Optional in-bubble text displays volume, delta, ratio, or combinations, while the tooltip diagnostic panel remains accessible on hover regardless of whether bubble labels are enabled or disabled.
🟢 Important Notes
1. This indicator requires volume data to function. Make sure you are using a ticker from an exchange that provides volume data. Symbols that do not report volume (such as certain forex pairs on specific brokers or custom-built indices) will trigger a warning message on the chart and produce no signals. If you see the "No Volume Data" warning, switch to a symbol or exchange that supports volume reporting.
2. Whether you are scalping on lower timeframes or swing trading on daily and weekly charts, Volume Bubbles is designed to complement your existing setup rather than replace it. Use it as a confirmation layer alongside your preferred strategy to identify when statistically significant volume activity aligns with your trade thesis, adding a data-driven edge to entries, exits, and key level analysis across any timeframe and market.
Monte Carlo CT [SS]
This is the Monte Carlo CT indicator.
CT stands for "central tendencies" and is the real distinguishing characteristic of this indicator against other Monte Carlo based indicators.
In statistics, Central Tendency is a single value that attempts to describe a set of data by identifying the central position within that set. It is the typical or expected value that the data clusters around. While the most common measures are the mean (average), median (middle value), and mode (most frequent), in a Monte Carlo simulation the central tendency acts as the gravity point of the forecast. Because a random walk can technically produce infinite paths, with some shooting to the moon and others crashing to zero, the central tendency filters out those wild outliers to show you the most mathematically probable path forward.
Instead of looking at the chaos of 200 individual spaghetti lines, the central tendency condenses that massive dataset into a clean, usable trajectory. It essentially represents the path of least resistance based on the historical volatility and drift the model has identified. By focusing on the median and its surrounding percentiles, you are shifting your perspective from "What could happen?" to "What is likely to happen?"
Now that we have that cleared up, lets talk more about the indicator and its components.
The Core Engine: Anchored Monte Carlo
Traditional Monte Carlo simulations often generate a spaghetti chart of thousands of lines that are visually overwhelming and practically unusable for a trader.
As we discussed above, this indicator uses Central Tendencies to solve that. Instead of showing every random path, it runs the simulations in the background and only plots the distribution percentiles.
Why Central Tendency > Raw Simulations?
The Median (White Line): Represents the average outcome. If you ran these simulations infinitely, this is the center of the bell curve.
The 75/25 Zones (Solid Green/Red): These are the standard volatility bounds. Price spent 50% of its simulated time within this corridor.
The 95/05 Bounds (Dashed Green/Red): These represent "Statistical Extremes." If price reaches these levels, it is entering a 2-sigma move (an outlier event).
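Under the hood, a single step of the log-normal random walk can be sketched as follows (illustrative only; the script's actual drift and volatility estimation, path count, and percentile collection are not shown):
//@version=6
indicator("MC step sketch")
// Drift and volatility estimated from historical log returns.
float r     = math.log(close / close[1])
float mu    = ta.sma(r, 200)      // drift: mean log return
float sigma = ta.stdev(r, 200)    // volatility of log returns
f_simStep(float price) =>
    // standard normal draw via the Box-Muller transform
    float u1 = math.max(1e-9, math.random())
    float u2 = math.random()
    float z  = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    price * math.exp(mu + sigma * z)
plot(f_simStep(close), "One simulated next-bar price")
Repeating this step across many paths and many forecast bars, then ranking the endpoints, yields the median and percentile lines described above.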
The introduction of Naive Bayes
While everyone obsesses over KNN, I decided to do a little curve ball and try something new, most notably Naive Bayes.
While Monte Carlo is blind to current sentiment (it only cares about volatility and average returns), implementing a Naive Bayes Classifier allows the indicator to be highly observant. It looks at the relative volume and momentum to determine if the current bar looks like a winner or a loser based on the last x bars of training data.
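A minimal sketch of the Gaussian Naive Bayes step (illustrative names; the class statistics below are hardcoded placeholders that would, in the actual script, come from the rolling training window):
//@version=6
indicator("Naive Bayes sketch")
// Gaussian likelihood for one feature.
f_gaussPdf(float x, float m, float sd) =>
    float s = math.max(sd, 1e-9)
    math.exp(-(x - m) * (x - m) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))
float relVol = volume / ta.sma(volume, 50)
float mom    = ta.roc(close, 10)
// placeholder class statistics (assumed, not trained here)
float muVolUp = 1.1
float sdVolUp = 0.4
float muMomUp = 0.5
float sdMomUp = 1.0
float muVolDn = 0.9
float sdVolDn = 0.4
float muMomDn = -0.5
float sdMomDn = 1.0
// posterior is proportional to prior times the product of per-feature
// likelihoods; treating features as independent is the "naive" step
float pUp = 0.5 * f_gaussPdf(relVol, muVolUp, sdVolUp) * f_gaussPdf(mom, muMomUp, sdMomUp)
float pDn = 0.5 * f_gaussPdf(relVol, muVolDn, sdVolDn) * f_gaussPdf(mom, muMomDn, sdMomDn)
float winProb = 100.0 * pUp / math.max(pUp + pDn, 1e-12)
plot(winProb, "Win Prob %")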
Interpreting the Table
The NB Analysis table in the top right is your tactical dashboard:
Win Prob: This is the Posterior Probability . It’s the calculated likelihood that the current market conditions will lead to a positive price move.
Example: 65% means the training data of the current volume and momentum is historically skewed toward bulls.
MC Median: This pulls the final price point from the white Monte Carlo line. It gives you a specific price target for the end of your forecast horizon.
Rel Volume: Shows how much effort the market is putting in compared to its 50-period average. High volume + high Win Prob is a high-conviction signal.
Signal (LONG/SHORT): A binary output. If Win Prob > 50%, it flips to LONG.
Confidence: This filters the noise. If the Win Prob is between 40% and 60%, the model is essentially tossing a coin (Moderate). If it hits >70% or <30%, the statistical evidence is strong (High).
Using this tool
Ah yes, the practicality. Boring but important.
The most effective way to use this tool is to look for Convergence:
Check the Table: Is the Signal "LONG" with "HIGH" Confidence?
Check the Forecast: Does the Monte Carlo Median (White Line) have an upward slope?
Execute: Use the 25% (Solid Red) or 05% (Dashed Red) lines as buy-the-dip zones within a bullish forecast. Conversely, use the 95% (Dashed Green) as a logical place to take profits or tighten stops.
Customizations
In the user settings menu, you can adjust:
The lookback or training length for the Monte Carlo Simulations
The forecast length
The training length for the Naive Bayes model
Some general tips are:
Make sure your lookback is the same size or larger than your forecast
Match the forecast length with your trading horizon. If you want to be in no more than 1 hour on the 1-Minute chart, make sure you are setting this for a forecast horizon of 60 candles.
The Cherry on Top
In professional quantitative finance, we don't just guess; we model. This indicator uses a Log Normal Random Walk for the Monte Carlo and a Gaussian PDF (Probability Density Function) for the Naive Bayes, bringing institutional-grade math to the Pine Script environment. It treats trading as a game of probabilities, not certainties.
And there you have it! Hopefully you find this helpful and enjoy.
Thanks for reading and checking it out!
HTF Volume Spike & Imbalance Projection [LuxAlgo]
The HTF Volume Spike & Imbalance Projection indicator provides a comprehensive multi-timeframe analysis tool that projects higher timeframe (HTF) candle structures, volume spikes, and volume profiles directly onto the current chart. This script aims to bridge the gap between different time horizons, allowing traders to identify institutional interest, supply/demand zones, and significant order flow imbalances within an HTF context without ever switching timeframes.
🔶 USAGE
🔹 The Core Concept
While standard charts only show the OHLC of a candle, this indicator deconstructs a single large candle (e.g., a 1-hour candle) into its individual internal components. It looks for Volume Spikes (moments of high activity) and Stacked Imbalances (where aggressive participants stepped in repeatedly) to reveal the "story" inside the bar.
🔹 Reading the Projection
The indicator projects two visual blocks to the right of your current price action:
Ghost Bar (Left Block): This represents the previous completed HTF candle. It is faded out to provide historical context and show where the previous "value" was established.
Current Bar (Right Block): This represents the HTF candle currently forming and updates in real-time as new data arrives.
Each block is divided into three distinct visual sections:
HTF Candle: A standard candle representation of the higher timeframe (Open, High, Low, Close).
Scatter Plot (The Bubbles): Every bubble represents a volume spike that occurred on the lower timeframe (LTF) granularity. The size of the bubble indicates higher volume, while the color indicates buying (Green) or selling (Red) pressure. Dotted lines connect the High and Low of the candle to this zone for reference.
Volume Profile (The Histogram): Displays the total distribution of volume across the entire HTF candle, highlighting high-volume nodes.
🔹 Key Feature: Stacked Imbalances
Look for the solid colored boxes behind the scatter plot bubbles. These "Stacked Imbalances" appear when 3 or more volume spikes of the same direction occur at the same price level within one HTF candle.
Bullish Imbalance (Green Box): Indicates a strong area of buying interest. These often act as support levels.
Bearish Imbalance (Red Box): Indicates a strong area of selling interest. These often act as resistance levels.
🔹 On-Chart Bubbles
The bubbles visible on the actual candles of your chart are the same spikes shown in the projection.
A cluster of large bubbles at the top of a candle indicates exhaustion or heavy selling at the highs. Large bubbles at the bottom indicate a strong floor being built by buyers.
🔶 DETAILS
The indicator uses request.security_lower_tf() to pull granular data. By analyzing volume at this "Spike Granularity," the script can pinpoint specific price levels where volume exceeded a moving average by a user-defined multiplier.
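A hedged sketch of that spike test (the script computes its volume average at the spike granularity itself; a within-bar average is used here as a simple stand-in):
//@version=6
indicator("Volume spike sketch", overlay = true)
// Flag any 1-minute intrabar print whose volume exceeds the within-bar
// average by the multiplier.
ltfVol = request.security_lower_tf(syminfo.tickerid, "1", volume)
float mult = input.float(4.0, "Volume spike multiplier")
int spikes = 0
if ltfVol.size() > 0
    float avgLtf = ltfVol.sum() / ltfVol.size()
    for v in ltfVol
        if v > avgLtf * mult
            spikes += 1
plotchar(spikes > 0, "Spike bar", "•", location.abovebar)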
🔹 Trading Tips
Identify Point of Control: Use the Volume Profile in the projection to see where the "Value" is. Trading usually gravitates back to high-volume areas.
Trade the Imbalances: When price returns to a previously formed "Stacked Imbalance" box from the Ghost Bar, look for a reversal. These are high-probability areas where institutional activity was detected.
Volatility Detection: If the scatter plot is empty, the current move is "low conviction" (low volume). If it's filled with large bubbles, big players are active.
🔶 SETTINGS
🔹 Higher Timeframe (Anchor)
HTF Anchor Timeframe: Defines the timeframe of the main projection. The "Auto" setting selects a logical HTF based on your current chart.
🔹 Volume Spike Detection
Spike Granularity: The lower timeframe used to find individual spikes.
Volume Spike Multiplier: The threshold used to define a "spike" relative to the volume average.
Volume MA Length: The lookback period for the volume average.
🔹 Advanced Features
Show Ghost (Previous) Bar: Toggles the visualization of the previous HTF period.
Highlight Stacked Imbalances: Enables detection of price zones with high-frequency aggressive volume.
Show Anchor Connection Lines: Toggles lines connecting chart levels to the projection block.
Show Bubbles on Chart Candles: Toggles the LTF volume spikes directly on the main chart bars.
Swing Structure Forecast [BOSWaves]
Swing Structure Forecast - Statistical Swing Projection System with Volatility-Adaptive Support and Resistance Detection
Overview
Swing Structure Forecast is a statistically-driven swing analysis system that maps directional price structure through confirmed pivot identification, where support and resistance zones are constructed automatically at each swing extreme and a probabilistic forecast beam projects the next swing leg using aggregated historical swing measurements.
Rather than applying fixed price targets, universal extension ratios, or lagging directional filters, zone boundaries, forecast direction, and projection magnitude are governed by structural pivot confirmation, ATR-proportioned zone sizing, and rolling statistical measurement of completed swing history across a configurable sample window.
This produces a continuously refreshed structural map alongside a data-grounded forward projection. Zones breathe with volatility cycles and forecasts are calibrated to the instrument's own measured behaviour rather than theoretical constants or fixed multiples.
Price is therefore assessed against structurally-anchored zones derived from confirmed swing pivots, with directional expectations built from the statistical record of prior completed legs rather than external reference points.
Conceptual Framework
Swing Structure Forecast is built on the premise that genuine support and resistance originate at confirmed swing extremes, and that the statistical character of completed swing legs contains meaningful information about the magnitude and duration of the move that will follow.
Standard projection methodologies apply predetermined ratios that treat every instrument and market condition as interchangeable. This framework instead extracts magnitude expectations from the instrument's own swing record, building an evidence base from recent completed legs and distilling it into a statistically-grounded projection originating at the current confirmed pivot.
Three core principles shape the design:
Support and resistance zones should originate at structurally confirmed swing highs and lows, not at indicator crossovers, arbitrary distances, or price patterns lacking pivot confirmation.
Zone width must respond to prevailing volatility, expanding proportionally when ATR is elevated and compressing when market conditions quieten.
Forecast targets and projection uncertainty should be derived from the distribution of the instrument's own recent swing history, with variability expressed visually rather than hidden behind a single projected level.
This repositions price structure work from passive historical reference into an active, instrument-specific projection framework that updates with each new confirmed swing.
Theoretical Foundation
The indicator unifies structural pivot detection, ATR-responsive zone construction, rolling statistical aggregation, and Fibonacci extension mapping.
Swing highs and lows are established through a rolling highest/lowest comparison across a configurable lookback window, accepting only pivots surrounded by sufficient structural confirmation on both sides. A 200-period ATR provides a slow-moving, stable volatility reference that scales zone thickness and beam width proportionately across varying instruments and timeframes. Completed swing percentages and durations populate a rolling sample array, with three aggregation modes — weighted, average, and median — giving users direct control over how heavily recent legs are weighted against older history. Standard deviation across this sample governs beam width, producing narrow projections when swing history is consistent and widening the beam when prior legs have varied significantly in magnitude.
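As a hedged sketch (illustrative names, not the published source), the three aggregation modes can be expressed over a rolling array of completed swing magnitudes:
//@version=6
indicator("Aggregation sketch")
// Resolves a rolling sample of swing magnitudes (newest last) into one
// expected value using the selected mode.
f_aggregate(float[] swings, string mode) =>
    int n = swings.size()
    float out = na
    if n > 0
        if mode == "Median"
            out := swings.median()
        else if mode == "Average"
            out := swings.avg()
        else                       // "Weighted": newer legs count more
            float num = 0.0
            float den = 0.0
            for i = 0 to n - 1
                float w = i + 1.0  // linear recency weight
                num += swings.get(i) * w
                den += w
            out := num / den
    out
var float[] sample = array.from(2.1, 1.8, 3.0, 2.4)   // demo % magnitudes
plot(f_aggregate(sample, "Weighted"))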
Four internal systems work in coordination:
Pivot Detection Engine : Confirms swing highs and lows through multi-bar structural comparison, withholding confirmation until price movement validates the extreme and eliminating repainting.
Zone Construction System : Builds dual-layer ATR-proportioned boxes at each confirmed pivot, applying progressive opacity reduction with age and monitoring for structural breach events.
Forecast Engine : Processes the rolling swing sample through the selected statistical method and casts the next projected leg as a smoothed cone beam originating at the current pivot, scaled by historical variance.
Fibonacci Extension System : Deploys individually toggleable extension levels beyond the primary forecast target, each with a fully configurable ratio for defining continuation objectives.
This structure keeps the structural map and forward projection permanently coupled, refreshing in unison whenever a new swing confirms.
How It Works
Swing Structure Forecast processes price through a structured sequence of pivot-aware operations:
Pivot Confirmation : Bar highs and lows are continuously compared against a rolling window of configurable length. A swing high locks in once price retreats sufficiently from the peak; a swing low locks in once price advances sufficiently from the trough, ensuring no repainting occurs.
Zone Placement : A dual-layer box anchors at each confirmed pivot. An outer boundary encloses the broader reaction area and an inner zone concentrates the higher-probability interaction region.
Age-Based Fading : Zone opacity diminishes progressively as elapsed bars accumulate since formation, weighting recent structural levels visually above older historical context.
Breach Detection : A close beyond a zone's anchor level triggers conversion to a dotted outline and initiates an automatic removal sequence, purging invalidated structure from the chart.
Swing Recording : Each completed leg is logged as a percentage magnitude and a bar duration into the rolling sample array, capped at the user-defined sample count with oldest entries discarded first.
Statistical Aggregation : The selected method, weighted, average, or median, resolves the sample into an expected magnitude and duration for the forthcoming swing leg.
Beam Construction : A three-layer cone extends forward from the current pivot anchor using smoothstep-eased interpolation, with width proportional to sample standard deviation and opacity grading across nested layers.
Target Zone : A bounding box placed at the beam terminus presents the projected price level and expected percentage move, with box height communicating the degree of forecast uncertainty.
Fibonacci Extensions : Configurable ratio levels project beyond the primary target, establishing pre-mapped objectives for continuation moves that exceed the base projection.
These processes collectively sustain a live structural framework and a statistically-grounded projection that regenerates with every newly confirmed swing pivot.
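A minimal sketch of the first two steps, confirmed pivot detection and ATR-proportioned zone placement (illustrative values, not the published source):
//@version=6
indicator("Pivot zone sketch", overlay = true, max_boxes_count = 100)
// A pivot confirms only after `lb` bars on each side, so it never repaints;
// zone height here is proportioned by the slow 200-period ATR reference.
int lb = input.int(10, "Pivot lookback")
float atrRef = ta.atr(200)
float ph = ta.pivothigh(high, lb, lb)
if not na(ph)
    box.new(bar_index - lb, ph, bar_index + 20, ph - atrRef * 0.5,
         border_color = color.red, bgcolor = color.new(color.red, 85))
float pl = ta.pivotlow(low, lb, lb)
if not na(pl)
    box.new(bar_index - lb, pl + atrRef * 0.5, bar_index + 20, pl,
         border_color = color.green, bgcolor = color.new(color.green, 85))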
Interpretation
Swing Structure Forecast should be read as a structural boundary map combined with a probabilistic directional projection:
Support Zones (Green) : Constructed at confirmed swing lows, marking price regions where prior downside pressure exhausted and upward reversals originated.
Resistance Zones (Red) : Established at confirmed swing highs, identifying areas where prior upside pressure stalled and downward reversals began.
Zone Opacity : Communicates structural age. Vivid zones reflect recent pivot formation; subdued zones represent older levels retained for broader historical context.
Broken Zones : Transition to faint dotted outlines on breach, preserved as reference markers without visually competing with structurally intact levels.
Forecast Beam : Extends forward from the most recently confirmed pivot, projecting the statistically expected next leg. Cone width encodes uncertainty drawn from sample variance.
Narrow Beam : Prior swing history shows consistent magnitude, indicating relatively high projection confidence.
Wide Beam : Prior swing history shows significant variability, indicating greater uncertainty and warranting additional confirmation before acting.
Target Zone and Label : Mark the statistically derived price destination alongside expected percentage move and absolute price level.
Fibonacci Extensions : Pre-mapped levels beyond the primary target defining structured continuation objectives for extended directional moves.
Path Markers : Dot markers positioned along the beam centerline with opacity fading toward the target, conveying projected trajectory and directional progression.
Structural context, beam width, and sample consistency are more significant than any individual projected value in isolation.
Signal Logic and Visual Cues
Swing Structure Forecast operates through two principal visual frameworks:
Structural Zones : Continuously maintained support and resistance boxes anchored at confirmed pivots. Intact zones carry unbroken structural relevance; broken zones document levels that price has already closed through and structurally dismissed.
Forecast Beam : Repositions automatically on every new swing confirmation, simultaneously refreshing the beam geometry, target zone, path markers, and Fibonacci extensions to reflect the updated pivot origin and current statistical aggregation.
Alert conditions trigger on confirmed swing high and swing low events, supporting systematic structural monitoring without requiring active chart observation.
Strategy Integration
Swing Structure Forecast applies across structure-based, mean-reversion, and trend-continuation trading methodologies:
Structure-Referenced Entries : Treat intact zones as interaction boundaries for entry decisions, assigning greater weight to recently formed levels over aged, heavily faded structure.
Instrument-Calibrated Targets : Use the statistical projection as a primary take-profit reference built from the instrument's own measured swing history rather than applied universal ratios.
Beam Width Conviction Scaling : Adjust confirmation requirements relative to current beam width. Wide beams call for additional validation before committing; narrow beams reflect historically stable swing magnitude.
Fibonacci Continuation Planning : Reference extension levels beyond the primary target when trending conditions suggest the initial projection may be exceeded.
Broken Zone Flip Monitoring : Track recently breached zones as candidate reversal levels where former support may transition to resistance and vice versa following structural invalidation.
Multi-Timeframe Structural Context : Reference higher-timeframe zones as macro boundaries while applying lower-timeframe forecast projections for entry precision and target identification.
Sample Population Patience : Defer high-conviction treatment of projection outputs until the sample window has accumulated sufficient completed swings, particularly on instruments or timeframes with limited history.
Technical Implementation Details
Core Engine : Rolling highest/lowest pivot detection with configurable lookback and no-repaint confirmation logic
Zone Construction : Dual-layer ATR-proportioned boxes with progressive opacity fading, breach detection, and automatic invalidation removal
Statistical Model : Weighted, average, or median aggregation across configurable rolling sample with standard deviation uncertainty scaling
Forecast Geometry : Smoothstep-eased three-layer polyline beam with standard deviation width scaling and graduated opacity
Target Visualisation : Projection label with percentage move and price level enclosed by uncertainty-proportioned target box
Fibonacci System : Five independently toggleable extension levels with fully configurable ratios
Alert Coverage : Swing high confirmation and swing low confirmation events
Performance Profile : Optimised for real-time execution across all timeframes with configurable zone capacity and sample limits
Optimal Application Parameters
Timeframe Guidance:
1 - 15 min : Near-term swing structure with short-horizon projection for intraday approaches
1H - 4H : Intraday to multi-session structural mapping with intermediate forecast range
Daily - Weekly : Macro swing structure identification with extended projection targets
Suggested Baseline Configuration:
Swing Length : 16
Zone Width (ATR) : 0.3
Max Level Age : 300 bars
Samples : 20
Method : Weighted
Forecast Bars : 5
Fib Extensions : 1.0, 1.272, 1.618 active
These suggested parameters serve as a starting baseline; their effectiveness varies with the instrument's volatility profile, characteristic swing cadence, and preferred zone density, so incremental adjustment across multiple session types is recommended before drawing performance conclusions.
Parameter Calibration Notes
Apply the following refinements to adjust behaviour without modifying core logic:
Zones too wide : Lower Zone Width (ATR) to narrow zone boundaries, particularly on lower timeframes where ATR values produce oversized zones relative to typical price movement.
Too many zones forming : Raise Swing Length to impose stricter structural requirements before a pivot qualifies for zone creation.
Beam excessively wide : Sample history contains high variance. Raise Samples to dilute outlier legs or switch to Median to limit their influence on the projected magnitude.
Projection slow to reflect recent behaviour : Lower Samples or switch to Weighted method to concentrate projection weight on the most recently completed swing legs.
Significant pivots going undetected : Lower Swing Length to increase sensitivity and qualify shorter structural moves as confirmed pivots.
Forecast visual range misaligned with chart : Modify Forecast Bars to adjust how far projection visuals extend rightward without altering the underlying price target calculation.
Stale levels persisting on chart : Reduce Max Level Age to accelerate removal of older unbroken zones, keeping structural reference anchored to recent pivot history.
Adjustments should be applied incrementally and assessed across varied session conditions rather than calibrated against a single market period.
Performance Characteristics
High Effectiveness:
Markets exhibiting rhythmic swing sequences with clearly defined structural turning points
Instruments where volatility follows identifiable expansion and contraction patterns that ATR captures proportionately
Trend-continuation approaches targeting measured extensions derived from the instrument's own swing record
Mean-reversion strategies using confirmed structural zones as primary entry and exit reference boundaries
Reduced Effectiveness:
Directionless, low-conviction conditions generating frequent shallow pivots that populate the sample with structurally insignificant measurements
Event-driven or gap-heavy sessions producing swing magnitudes that are unrepresentative of normal instrument behaviour
Instruments with erratic or non-stationary volatility profiles where ATR-based proportioning loses consistency
Early sessions on a given timeframe before sufficient completed swings have accumulated to produce statistically reliable projections
Integration Guidelines
Confluence : Pair with BOSWaves volume tools, order flow indicators, or broader market structure analysis to reinforce zone and forecast interpretation
Sample Discipline : Reserve high-conviction treatment for projections generated once the sample window is fully populated with completed swings
Breach Acceptance : Treat breached zones as structurally void and resist anchoring expectations to levels price has already invalidated with a closing breach
Beam Width Respect : Read a wide beam as a requirement for additional confirmation before acting, not permission to disregard the projection entirely
Directional Consistency : Sustain bias aligned with the current forecast direction until a newly confirmed swing pivot shifts the projection origin
Timeframe Confluence : Highest-quality structural setups emerge when active zones and forecast direction correspond across multiple timeframes simultaneously
Disclaimer
Swing Structure Forecast is a professional-grade swing structure and statistical forecasting tool. All projections are derived from historical swing behaviour and represent probabilistic expectations rather than assured outcomes. Performance depends on the consistency of prior swing history, prevailing market conditions, parameter selection, and disciplined application. BOSWaves recommends deploying this indicator as one component within a comprehensive analytical framework incorporating trend context, volume analysis, and rigorous risk management practices.
TASC 2026.04 A Synthetic Oscillator
█ OVERVIEW
This script implements a Synthetic Oscillator as presented by John F. Ehlers in the April 2026 TASC Traders' Tips article "Avoiding Whipsaw Trades". The indicator aims to provide a smooth, low-lag oscillator for timely trading signals by dynamically mapping a sine wave to price data.
█ CONCEPTS
"Whipsaw" trades are a common issue in algorithmic trading. They occur when the market quickly moves against a position, causing the trader/trading system to reverse their position at a loss, and then the market reverses again and continues in the original direction. Such trades occur because the trading system is attempting to react quickly to market moves instead of focusing on broader market cycles.
A typical solution for reducing whipsaw trades is to apply linear filters to smooth the data and emphasize specific cycles. However, linear filters cannot have both a smooth response and a low computational lag. Therefore, strategy designs utilizing linear filters require a tradeoff between smoothness and lag.
Ehlers proposes a nonlinear indicator as a solution to bridge the gap and achieve a smooth, timely response while reducing whipsaw trades.
The Synthetic Oscillator adapts to market conditions by calculating a dynamic sine wave from the estimated instantaneous dominant cycle over a range of periods.
The process to calculate the indicator is as follows:
Smooth the price data with a 12-bar Hann Window filter to reduce high-frequency noise, which can affect dominant cycle estimates (a sketch of this windowing step follows this list).
Band-pass filter the windowed data with a two-pole high-pass filter and a SuperSmoother filter to focus on the range of cycles between a specified lower bound and upper bound, and normalize the result using the filter's 100-bar root mean square (RMS).
Calculate the one-bar rate of change (ROC) in the oscillator from step 2, and normalize the ROC using its 100-bar RMS.
Estimate the instantaneous dominant cycle from the oscillators in steps 2 and 3 by treating the series as a complex waveform, where the first oscillator represents the waveform's band-limited "real" component ("I"), and the second represents the band-limited "imaginary" component ("Q").
Cumulatively sum the reciprocal of the dominant cycle (i.e., the dominant frequency) to obtain the phase angle of the sine wave.
To reduce cumulative errors and lag in the phase angle calculation, compute a secondary band-pass filter from a high-pass filter and the UltimateSmoother, and reset the angle to 0 or 180 degrees when that filter crosses above or below 0.
Calculate the Synthetic Oscillator as the sine of the final phase angle.
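As an illustration of step 1 only, the sketch below shows one common Hann-window FIR form. The exact coefficients are an assumption based on Ehlers' related published work; the article's actual code is not reproduced here.
//@version=6
indicator("Hann Window Sketch")
// Hann-window FIR smoother: cosine-shaped weights, normalized to sum to 1
f_hann(float src, int length) =>
    float num = 0.0
    float den = 0.0
    for i = 1 to length
        float w = 1.0 - math.cos(2.0 * math.pi * i / (length + 1))
        num += w * src[i - 1]
        den += w
    num / den
plot(f_hann(close, 12), "Hann-smoothed source")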
█ USAGE
This indicator displays the Synthetic Oscillator and a horizontal zero line in a separate pane. Users can analyze the crossings between the oscillator value and 0, or the behavior of the oscillator as it reaches 1 or -1, to derive potential timely trading signals.
Ehlers notes in the article that the peaks and valleys of the Synthetic Oscillator can provide signals a little too early, depending on the settings and context. Therefore, he recommends applying another smoother to the oscillator, such as a Hann Window filter with an optimizable length, to adjust timing as necessary.
█ INPUTS
This indicator uses multiple hardcoded parameters based on the implementation in Ehlers' article. However, users can customize the source series and the upper and lower bounds of the calculations:
Source Series: The series of values to process.
Lower Bound: The smallest cycle in the passband of the filters, and the lower limit of the dominant cycle estimate.
Upper Bound: The largest cycle in the passband of the filters, and the upper limit of the dominant cycle estimate.
Volume Spread Analysis IQ [TradingIQ]
Hello Traders!
🔹Volume Spread Analysis IQ
This indicator received the most votes in our indicator competition - so here it is! Hope you guys like it :D
Volume Spread Analysis IQ is a chart-reading tool built to help traders judge effort, result, and background context in a way that is visual and practical.
Instead of forcing you to interpret volume and spread in isolation, this indicator organizes what the bar is doing into a readable structure so you can quickly see when the market is showing:
low participation
high participation
narrow or wide spread
potential hidden strength
potential hidden weakness
contextual VSA signals such as No Demand, No Supply, Upthrusts, Shakeouts, and Stopping Volume
🔹Why Effort vs Result Matters in Volume Spread Analysis
The following information is relevant to VSA interpretation.
In any market, price movement is the visible outcome of an underlying battle between buyers and sellers. Volume represents the effort being applied in that battle, while the spread of the candle reflects the result of that effort.
When effort and result move together, the market is behaving efficiently. High effort producing a large price move suggests strong conviction and participation. In trending conditions this often confirms that the dominant side of the market is still in control.
However, when effort and result begin to diverge, it can reveal hidden information about what is happening beneath the surface.
For example:
High effort with very little upward progress may indicate that strong selling pressure is absorbing buyers. Even though buyers are active, their effort is not producing meaningful results. This type of imbalance can appear before weakness develops.
Likewise, high effort with very little downward progress can signal that sellers are being absorbed by hidden demand. Large amounts of selling activity fail to push price lower, suggesting accumulation may be taking place.
Low effort situations are also informative. A rally with very low effort often lacks participation and can signal weak demand, while a selloff with very little effort can suggest that selling pressure is fading.
From a structural perspective, the effort/result relationship helps traders distinguish between moves driven by genuine participation and moves that occur simply because the market is temporarily thin. This distinction can be important when evaluating breakouts, pullbacks, or potential reversals.
In short, effort tells you how hard the market is trying to move, while result tells you how successful that attempt actually was. When these two fall out of balance, it often reveals shifts in supply and demand before they become obvious on price alone.
🔹What the indicator shows🔹
🔸Background bias
Each candle is tinted to reflect the recent VSA background. This helps you judge whether the market is currently leaning strong, weak, or neutral based on the recent flow of bullish and bearish evidence.
🔸Effort vs. Result view
The lower panel converts both volume and spread into easy-to-read rankings from 1 to 10 (one possible ranking scheme is sketched after this list).
Effort represents how active the market is.
Result represents how much price actually moved.
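For illustration, a percentile-rank scheme like the one below can map any series onto a 1-10 scale. The lookback length and the use of ta.percentrank are assumptions; the published script's exact classification may differ.
//@version=6
indicator("Effort vs Result Sketch")
int len = input.int(100, "Ranking lookback")
// percentile rank (0-100) mapped onto a 1-10 ranking
f_rank(float src) =>
    float pr = ta.percentrank(src, len)
    math.max(1, math.ceil(pr / 10.0))
plot(f_rank(volume), "Effort (E)", color.orange)
plot(f_rank(high - low), "Result (R)", color.teal)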
🔸Per-candle labels
Optional candle labels show a simple readout for each bar:
R = Result rank
E = Effort rank
🔸Effort vs. Result summary table
A live table on the chart shows the current effort rank, result rank, and the current interpretation of their relationship.
🔸Key VSA event markers
The script marks classic VSA conditions directly on the chart when they appear in the proper context:
No Demand
No Supply
Upthrust
Shakeout
Stopping Volume
🔹How to read it
Effort asks: How much activity came into this bar?
Result asks: How much did price actually move?
Background asks: Is recent behavior supporting strength or weakness?
This combination helps separate bars that look dramatic from bars that are actually meaningful.
For example:
High effort with poor upward result can hint that buying is struggling
High effort with poor downward result can hint that selling is being absorbed
Low effort rallies can warn of weak demand
Low effort selloffs can suggest supply is drying up
🔹Signal overview
No Demand
Highlights weak upward bars with low participation.
No Supply
Highlights weak downward bars where selling pressure appears limited.
Upthrust
Marks a rejection bar that appears in weak background conditions and can warn of downside risk.
Shakeout
Marks a lower rejection bar that appears in strong background conditions and can suggest bullish intent.
Stopping Volume
Flags heavy selling activity that may be halting a move lower. Context matters. In strong background it can be bullish. In weak background it can simply pause price before weakness resumes.
🔹Why this indicator is useful
Many traders can see volume. Far fewer can quickly judge whether that volume actually meant anything.
This tool is designed to help with exactly that.
It gives you:
a cleaner way to read volume and spread together
fast recognition of effort versus result imbalance
background context instead of isolated signals
VSA-style event labeling without requiring a cluttered chart
friendly settings for newer users, plus advanced overrides for experienced users
🔹Best use cases
confirming whether breakouts have real participation
spotting weak rallies and weak selloffs
judging whether aggressive bars are efficient or wasteful
finding VSA-style reversal or continuation clues
adding context to your existing market structure, liquidity, or price action model
🔹Important note
This indicator is a chart-reading tool, not a promise of outcomes. VSA works best when signals are interpreted in context, not taken mechanically one by one.
Use the background, the effort/result relationship, and the signal location together.
Important consideration
We scoured the internet, books, you name it, to find detailed information on VSA techniques. That said, the available information is sparse and conflicting depending on where you look. We relied mostly on gold-standard literature; however, even that literature is far from objective.
Many descriptions are similar to…
“An upthrust is a bar that pushes up and then fails, showing rejection of higher prices, usually in a weak background.”
Coding this requires interpretation by the engineer - there aren’t exact rules to follow. This means the indicator’s presentation of an upthrust, shakeout, etc. might not always align with your definition of those events.
You can customize the settings to force the indicator to better match your interpretation.
🔹Inputs you can customize
The script includes simple user-friendly controls such as:
What counts as a small body
What counts as a long wick
How strict close location should be
How strict spread and volume classifications should be
How much background proof you want before the indicator leans strong or weak
Whether to use broader or more traditional No Demand / No Supply logic
Whether Shakeouts and Upthrusts should require clear trend alignment
Advanced users can also enable raw threshold overrides for finer control.
🔹Closing Notes
And that’s about it!
This script might receive updates in the future if the community asks for it - stay tuned!
Thank you TradingView as always!
Market Microstructure Analytics
The Hidden Toll on Every Trade
Every time you buy or sell a financial instrument, you pay a cost that never appears on your brokerage statement. It is not a commission. It is not a fee. It is the spread between the price at which someone is willing to sell to you and the price at which someone is willing to buy from you. That gap, measured in ticks, basis points, or fractions of a percent, is the bid-ask spread. Over a single trade it looks small. Over thousands of trades, across a year, for a fund managing billions, it compounds into one of the most significant sources of performance drag in all of finance.
For decades, institutional traders have measured this cost obsessively. Research desks at hedge funds and investment banks have dedicated entire teams to understanding when spreads are wide, why they widen, who is causing them to widen, and what that signal implies about the near-term behaviour of a market. Retail traders, however, have had almost no access to this kind of analysis. The reason is simple: measuring the bid-ask spread in real time requires access to the order book, tick-by-tick trade data, and quote data that most platforms either do not provide or lock behind expensive data terminals.
This indicator changes that. Using only the OHLCV data that every chart on TradingView already contains, it reconstructs spread estimates and liquidity conditions through seven statistically validated models drawn directly from the academic market microstructure literature. It cannot replicate what a full order book feed provides, and the documentation is explicit about where the approximations are. But it gets considerably closer than anything available to the typical chart-based trader, and on short intraday charts it delivers information that is genuinely useful for both execution decisions and regime assessment.
What Market Microstructure Actually Measures
Market microstructure is the academic field that studies how prices are formed at the level of individual transactions. Its central question is not where a price will go tomorrow but how the mechanics of trading itself affect price formation right now. Two papers published decades apart established the framework this indicator builds on.
The first was by Roll (1984), who noticed something elegant: in an efficient market, the prices of consecutive trades should not be correlated with each other, because any predictability would be arbitraged away. But if you look at actual trade-by-trade price changes, you consistently find negative autocorrelation. Prices bounce back and forth. The reason, Roll argued, is the bid-ask spread itself. Buyers trade at the ask and sellers at the bid, so consecutive trades alternate between two price levels. This bouncing creates a predictable negative covariance in price changes, and the size of that covariance is directly related to the size of the spread. From this insight he derived the formula S = 2 times the square root of the negative covariance of consecutive price changes. If you observe a series of trades and measure how negatively they correlate with each other, you can back out the spread without ever seeing a quote.
The second foundational contribution came from Kyle (1985), who approached the problem from a completely different angle. He asked: if a market contains some traders who have private information about the true value of an asset, how do their orders affect price? His answer was the lambda coefficient, a measure of how much the price moves per unit of net order flow. A high lambda means the market is thin and informed: each additional unit of buying or selling pushes the price significantly. A low lambda means the market absorbs flow without moving much. Lambda is not just a spread measure; it is a measure of how much information asymmetry exists in the market at any given moment. This is the adverse selection component of the spread, and it is arguably the most strategically useful signal the indicator produces.
The Spread Estimators
The first layer of computation produces four distinct estimates of the bid-ask spread, each using a different statistical approach.
The Roll (1984) estimator is the oldest and most widely cited. It computes the rolling covariance between a price change and the price change that came before it, then takes two times the square root of the negative of that covariance. One important detail: Roll's model is defined in terms of absolute price changes, not log-returns. Using log-returns introduces a scaling distortion tied to the price level of the asset, which biases the spread estimate upward at high prices. This implementation correctly uses delta-P throughout.
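A minimal Pine Script sketch of that calculation, assuming a plain rolling covariance and an illustrative 20-bar window:
//@version=6
indicator("Roll Spread Sketch")
int len = input.int(20, "Estimation window")
float dP = close - close[1]
// rolling covariance of consecutive price changes
float cov = ta.sma(dP * dP[1], len) - ta.sma(dP, len) * ta.sma(dP[1], len)
// the estimator is only defined when the serial covariance is negative
float rollSpread = cov < 0 ? 2.0 * math.sqrt(-cov) : na
plot(rollSpread, "Roll spread", color.red)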
The Corwin-Schultz (2012) estimator takes a fundamentally different approach. Rather than looking at the serial structure of price changes, it uses the high-low range of a bar. The core insight is that the high price of any trading period is most likely a transaction that occurred at the ask, while the low price is most likely a transaction at the bid. If you look at a two-period window, the combined high-low range reflects the true price variance over those two periods plus the spread component. A single-period range conflates variance and spread; the two-period structure allows them to be separated algebraically. The resulting formula involves a decomposition using the constant k = 3 minus 2 times the square root of 2, which emerges from the statistical properties of the high-low range under continuous diffusion. Corwin and Schultz (2012) validated this estimator extensively against actual quoted spreads across thousands of US equities and found it performs well both in cross-section and over time.
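Sketched in Pine Script, the per-pair calculation looks roughly as follows; flooring negative estimates at zero and omitting any smoothing are simplifications:
//@version=6
indicator("Corwin-Schultz Sketch")
float k = 3.0 - 2.0 * math.sqrt(2.0)
// beta: sum of squared single-bar log ranges over two consecutive bars
float beta = math.pow(math.log(high / low), 2) + math.pow(math.log(high[1] / low[1]), 2)
// gamma: squared log range of the combined two-bar high-low span
float gamma = math.pow(math.log(math.max(high, high[1]) / math.min(low, low[1])), 2)
float alpha = (math.sqrt(2.0 * beta) - math.sqrt(beta)) / k - math.sqrt(gamma / k)
float s = 2.0 * (math.exp(alpha) - 1.0) / (1.0 + math.exp(alpha))
plot(math.max(s, 0.0), "CS spread", color.orange)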
The Abdi-Ranaldo (2017) estimator is the most recent of the three and, in empirical tests, the most stable. For each bar, it computes a quantity called c, defined as the log of the close price minus the average of the log-high and log-low. This is the signed deviation of the close price from the geometric midpoint of the bar's range, expressed in log-space. Abdi and Ranaldo proved that the expected value of the product of c at time t and c at time t plus one equals negative one quarter of the spread squared. This means that by measuring how negatively c correlates with the next period's c, you can recover the spread. The estimator inherits much of the intuition of Roll but anchors itself to the intrabar price range rather than the close-to-close change, which tends to reduce noise substantially. To handle cases where the high and low are identical, which occurs on 1-tick bars or extremely liquid instruments, the implementation excludes invalid pairs from the covariance calculation rather than substituting zeros, which would bias the estimate toward zero.
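The sketch below mirrors that logic, including the exclusion of invalid pairs rather than zero substitution; the fixed 20-bar window is illustrative:
//@version=6
indicator("Abdi-Ranaldo Sketch")
int len = 20
float c = math.log(close) - 0.5 * (math.log(high) + math.log(low))
// average c_t * c_{t+1} over valid pairs only (bars where high == low are skipped)
float sumP = 0.0
int nP = 0
for i = 0 to len - 1
    float prod = c[i] * c[i + 1]
    if not na(prod) and high[i] != low[i] and high[i + 1] != low[i + 1]
        sumP += prod
        nP += 1
float ecc = nP > 0 ? sumP / nP : na  // proxy for E[c_t * c_{t+1}] = -S^2 / 4
float arSpread = ecc < 0 ? 2.0 * math.sqrt(-ecc) : na
plot(arSpread, "AR spread", color.teal)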
The effective spread proxy takes yet another approach. Rather than estimating the quoted spread, it attempts to estimate the effective spread, which is the actual cost paid by a specific trade. The formula is two times the trade direction multiplied by the distance between the transaction price and the quote midpoint. Trade direction is approximated using the tick rule, which assigns a positive sign to transactions at prices higher than the previous price and a negative sign to those at lower prices, carrying the previous sign forward when the price is unchanged. This classification method was formalised by Lee and Ready (1991) and remains the standard approach for assigning direction when quote data is unavailable. The bar midpoint substitutes for the true quote midpoint, which introduces a systematic upward bias because the high and low of a bar are extreme transaction prices, not quotes. The effective spread proxy is therefore most reliable as a relative indicator of whether transaction costs are rising or falling, rather than as an absolute estimate of the quoted spread.
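A compact sketch of the proxy, with the tick-rule carry-forward and the bar midpoint standing in for the quote midpoint:
//@version=6
indicator("Effective Spread Proxy Sketch")
float q = math.sign(close - close[1])
q := q == 0 ? nz(q[1], 1.0) : q  // carry the previous sign when price is unchanged
float mid = math.avg(high, low)  // bar midpoint substitutes for the quote midpoint
float eff = 2.0 * q * (close - mid)
plot(eff, "Effective spread proxy", color.purple)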
The Liquidity Metrics
The second layer moves beyond spread estimation into broader liquidity measurement. The key distinction is this: the spread tells you what it costs to execute one trade right now. Liquidity metrics tell you something about the structure of the market, how deep it is, how much information is embedded in the current order flow, and how efficiently prices are absorbing volume.
The Amihud (2002) illiquidity ratio is the most widely used liquidity measure in the academic asset pricing literature. Its construction is conceptually simple: it divides the absolute value of a log return by the dollar volume of trading in the same period. What this measures is price impact per dollar traded. If a stock moves one percent and 10 million dollars changed hands, the ratio is small. If the same one percent move happened on only 50,000 dollars of volume, the ratio is large, indicating a thin market where small amounts of capital move prices significantly. Unlike the spread measures, which capture the cost of a single round trip, the Amihud ratio captures market depth. This implementation uses dollar volume rather than share or contract volume, which is the correct specification for comparability across instruments at different price levels. The ratio is scaled by a factor of 100 million for display purposes; its absolute level is asset-dependent and should always be interpreted relative to the instrument's own history.
Kyle lambda, estimated here via ordinary least squares regression of price changes on signed volume, is the most theoretically sophisticated metric in the indicator. Each bar's signed volume is the total volume signed by the tick rule direction: positive if the bar closed higher than the previous bar, negative if it closed lower. The regression coefficient from regressing price changes on this signed volume is the lambda estimate. A high positive lambda means prices are moving more than expected for the amount of flow being absorbed, which is the signature of informed trading. When lambda rises, someone in the market likely knows something that others do not, and market makers are widening their spreads in response. The critical implementation detail here is that the volume must not be normalised before the regression. Normalising the signed volume changes the regression coefficient from a price-impact-per-share measure to a dimensionless sensitivity measure, which is a different quantity and does not correspond to Kyle's original model.
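Both liquidity metrics reduce to short expressions. The sketch below assumes a 20-bar window, a covariance-over-variance OLS slope, and the 1e8 display scaling mentioned earlier:
//@version=6
indicator("Amihud and Kyle Lambda Sketch")
int len = input.int(20, "Window")
// Amihud (2002): |log return| per unit of dollar volume, display-scaled
float dollarVol = close * volume
float amihud = dollarVol > 0 ? math.abs(math.log(close / close[1])) / dollarVol * 1e8 : na
plot(ta.sma(amihud, len), "Amihud illiquidity", color.orange)
// Kyle (1985) lambda via rolling OLS of price changes on raw signed volume
float d = math.sign(close - close[1])
d := d == 0 ? nz(d[1], 1.0) : d  // tick rule with carry-forward
float sv = d * volume            // signed volume, deliberately not normalised
float dP = close - close[1]
float covXY = ta.sma(sv * dP, len) - ta.sma(sv, len) * ta.sma(dP, len)
float varX = ta.variance(sv, len)
float lambda = varX > 0 ? covXY / varX : na
plot(lambda, "Kyle lambda", color.teal)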
The Parkinson (1980) range-based volatility estimator serves a supporting role: it estimates intrabar variance from the high-low range using the formula sigma-squared equals one over four times the natural log of two, multiplied by the square of the log ratio of high to low. This estimator is approximately five times more statistically efficient than the classic close-to-close variance estimator for the same number of observations (Parkinson 1980). Its role in this indicator is to help decompose the high-low range: the range reflects both volatility and the spread, and the ratio of the composite spread estimate to the Parkinson volatility tells you which component is dominant at any given time.
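In code the estimator is essentially one line; the averaging window below is an illustrative choice:
//@version=6
indicator("Parkinson Volatility Sketch")
int len = input.int(20, "Window")
// sigma^2 = (1 / (4 ln 2)) * E[(ln(H/L))^2]
float rng2 = math.pow(math.log(high / low), 2)
float parkVol = math.sqrt(ta.sma(rng2, len) / (4.0 * math.log(2.0)))
plot(parkVol, "Parkinson volatility", color.blue)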
The Composite and the Regime System
Having computed multiple independent estimates of the spread, the natural question is how to combine them. Simple averaging is theoretically suboptimal when the estimators have different levels of noise. The precision-weighted composite assigns each estimator a weight inversely proportional to its robust variance, so that noisier estimators contribute less to the final reading.
The key word is robust. Rather than computing standard rolling variance, which is dominated by extreme observations and can make a normally well-behaved estimator look unreliable for weeks after a single outlier bar, this implementation uses a variance estimator based on the Median Absolute Deviation, or MAD. The MAD is the median of the absolute deviations from the rolling median. Multiplied by the consistency factor 1.4826, it provides an equivalent to the standard deviation that is resistant to outliers with a breakdown point of 0.5, meaning up to half the observations in a window can be extreme values without corrupting the estimate. This approach follows Rousseeuw and Croux (1993), who established the formal properties of MAD-based scale estimators.
Two further safeguards stabilise the weights. A ridge regularisation term, set to five percent of the mean robust variance across active estimators, prevents any weight from exploding toward infinity when an estimator is temporarily near-constant. And a weight cap, set by default at 70 percent of the total, prevents any single estimator from dominating the composite during regimes where it happens to be locally smooth. The live weights are displayed in the dashboard so the user can always see how the composite is currently distributed.
The regime detection system answers the question of whether the current spread level is historically unusual. This is done through a robust z-score: the composite spread is compared to its rolling median, and the deviation is normalised by the MAD. The result is a standardised score that tells you how many robust standard deviations the current spread is from its recent typical level. A score of two or above signals a statistically unusual widening event. The same procedure is applied independently to the Amihud illiquidity ratio and to the absolute value of Kyle lambda.
These three scores are then combined into the Liquidity Stress Index, computed as their equal-weighted average after each component has been winsorised at plus or minus three robust standard deviations; the winsorisation prevents a single extreme reading in one dimension from overwhelming the composite. The result is mapped to a zero-to-100 scale using the hyperbolic tangent function, where 50 represents neutral conditions, readings in the 65 to 80 range indicate elevated stress, and readings above 80 indicate severe stress across multiple liquidity dimensions simultaneously.
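A sketch of the robust z-score, winsorisation, and tanh mapping follows. The stand-in components and the 0.25 sensitivity inside tanh are assumptions for illustration; only the 1.4826 factor, the plus-or-minus-3 cap, and the 50-neutral scale come from the description above.
//@version=6
indicator("Robust Z and LSI Sketch")
int len = input.int(100, "Regime window")
// MAD-based robust z-score (streaming rolling-median approximation)
f_robustZ(float src) =>
    float med = ta.median(src, len)
    float mad = 1.4826 * ta.median(math.abs(src - med), len)
    mad > 0 ? (src - med) / mad : 0.0
f_winsor(float z, float cap) =>
    math.max(-cap, math.min(cap, z))
f_tanh(float x) =>
    float cl = math.max(-20.0, math.min(20.0, x))  // avoid exp overflow
    float e = math.exp(2.0 * cl)
    (e - 1.0) / (e + 1.0)
// stand-in components; the real script scores the spread, Amihud, and |lambda|
float zSpread = f_winsor(f_robustZ(high - low), 3.0)
float zIlliq = f_winsor(f_robustZ(volume > 0 ? (high - low) / volume : 0.0), 3.0)
float lsi = 50.0 + 50.0 * f_tanh(0.25 * math.avg(zSpread, zIlliq))
plot(lsi, "Liquidity Stress Index (sketch)", color.red)
hline(50, "Neutral")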
Practical Use Cases
For a retail trader, the most immediately useful output is the composite spread and its regime classification. When the composite spread widens and the regime indicator shifts to Elevated or Stress, entering a new position becomes more expensive than usual. On illiquid instruments this widening can be dramatic, consuming a significant fraction of the expected profit in transaction costs before the trade even begins. Conversely, when spreads are compressed, the market is functioning efficiently and execution is cheap. Timing entries and exits around spread conditions is a simple, evidence-based way to reduce the invisible drag that erodes returns over time.
The spread trend indicator, which compares a five-period exponential moving average of the composite spread against a twenty-period average, provides a simple directional signal. A widening trend often precedes a period of higher volatility, lower liquidity, or increased uncertainty. This does not tell you which direction the price will move, but it tells you that the environment is becoming less predictable and more costly to trade, which is operationally important information.
For professional traders and systematic strategy developers, the Kyle lambda signal has specific applications. When lambda is elevated relative to its own history, which the dashboard displays as the adverse selection z-score, it indicates that price changes are disproportionate to the measured order flow. This is consistent with the presence of informed traders, a phenomenon central to the theoretical work of Kyle (1985) and Glosten and Milgrom (1985). Elevated adverse selection is one of the clearest early warning signs of an impending directional move driven by asymmetric information, such as pre-announcement positioning, earnings whispers, or macroeconomic data leakage.
The Amihud illiquidity ratio is particularly valuable for cross-asset comparisons and for monitoring the liquidity conditions of a specific instrument over time. Portfolio managers can use it to time their entries and exits in less liquid securities: entering when illiquidity is below its historical median and exiting before a period of known low liquidity such as a holiday period or low-volume session. Research by Amihud and Mendelson (1986) established that expected returns are positively correlated with illiquidity, meaning that investors demand higher compensation for holding assets where transaction costs are high. The illiquidity z-score in this indicator allows that premium to be tracked in real time.
The spread-to-volatility ratio is a metric that practitioners familiar with the work of Corwin and Schultz (2012) will recognise immediately. It expresses the composite spread as a percentage of the Parkinson volatility estimate. When this ratio is high, the spread accounts for a large fraction of the observed price range, which typically indicates a market where market makers are cautious and price discovery is slow. When it is low, the price range is driven primarily by genuine information, not by the mechanics of the spread. This ratio is useful for distinguishing between a volatile and actively traded market, which is generally healthy, and a wide-spread market that looks volatile but is actually just illiquid.
The Liquidity Stress Index in its scaled zero-to-100 form provides an accessible summary for traders who do not want to track multiple metrics simultaneously. During normal market conditions the reading sits near 50. When all three components, the spread, the illiquidity ratio, and the adverse selection estimate, are simultaneously elevated relative to their own histories, the index rises sharply. The historical examples of this pattern occurring together include the flash crash of May 2010, the August 2015 China-driven volatility spike, the COVID-19 crash of March 2020, and various cryptocurrency deleveraging events. In each case, the simultaneous widening of spreads, collapse of market depth, and spike in price impact coefficients preceded the most severe price dislocations by enough time to be actionable.
Configuration and Settings
The Estimation Window controls the rolling window for all covariance and liquidity calculations. A shorter window, around 10 to 20 bars, makes the estimators more responsive to recent changes but increases noise. A longer window, around 50 bars, produces smoother estimates that better reflect structural conditions but lag more. The default of 20 is a reasonable starting point for most intraday timeframes.
The EMA Smoothing parameter applies an exponential moving average to each raw spread estimate before it is used in the composite and displayed on the chart. This reduces bar-to-bar noise without introducing the same lag that a longer estimation window would create. Setting it to 1 disables smoothing entirely, which is useful for research purposes but not for trading.
The Regime Window determines how far back the robust z-scores look when assessing whether current conditions are unusual. A setting of 100 means the indicator asks whether the current spread is unusual relative to the last 100 bars. For daily charts, 100 bars is approximately five months of trading. For tick charts, it represents the most recent 100 tick bars. This parameter should be set large enough to capture at least one full market cycle of the relevant timeframe.
The Maximum Composite Weight prevents any single estimator from being assigned more than the specified fraction of total weight. The default of 70 percent is conservative; in practice, during regimes where all three estimators agree and produce similar variances, the weights tend to distribute fairly evenly. The cap becomes most important when one estimator is temporarily quiet and its MAD-based variance falls to near zero, which would otherwise assign it almost all the weight.
The LSI Winsorisation Cap limits the influence of extreme readings in any single component before they contribute to the Liquidity Stress Index. At the default of three robust standard deviations, a reading of ten, which would represent a truly exceptional event, contributes the same as a reading of three. This prevents a single data anomaly or calculation artifact from permanently elevating the stress index.
Structural Limitations
No representation is made that these outputs are equivalent to actual exchange quote data. They are not. TradingView provides bars, not tick-by-tick trades, and the academic models on which this indicator is based were developed for transaction-level data. The Roll estimator assumes that each observation is a single trade; when a bar aggregates hundreds or thousands of trades, the covariance structure it observes is a convolution of many individual trade-level covariances, and the result understates the true spread. This bias grows with bar duration and trade frequency. On 1-tick or 5-tick bars the bias is minimal; on daily bars it can be substantial.
The tick rule classification, which assigns trade direction to bars and underpins both the Kyle lambda and effective spread estimates, was designed for individual trades. Applied to the close price of aggregated bars, it misclassifies a material fraction of bars. Ellis, Michaely and O'Hara (2000) documented misclassification rates of 30 to 50 percent on daily stock data. On short intraday bars the performance is better, but it never reaches the accuracy achievable with actual quote data.
The rolling MAD computation is a streaming approximation to the exact finite-window MAD. In a stationary process the difference is negligible and the heavy-tail robustness property is preserved. In rapidly changing regimes the approximation introduces a small second-order error that does not materially affect the interpretation of the outputs.
The effective spread proxy suffers from a systematic upward bias because it uses the bar midpoint rather than the true quote midpoint. This bias is largest when the intrabar range is wide relative to the actual spread, which is precisely when the estimate is most needed. On very short tick bars the range collapses toward the actual spread and the bias diminishes, but on longer bars the effective spread reading should be treated as an upper bound rather than a point estimate.
Pine Script v6 introduced the built-in variables bid and ask, which return the current best bid and ask prices from a connected broker feed when accessed on the 1-tick timeframe via request.security(syminfo.tickerid, "1T", bid) and request.security(syminfo.tickerid, "1T", ask). This is a genuine improvement over bar-based proxies for the single most recent bar. However, these variables carry three constraints that prevent them from replacing the statistical estimators in this indicator. First, they carry no historical record: the values exist only at the current bar and return na on all prior bars, which makes it impossible to compute rolling covariances, MAD-based z-scores, or any of the regime detection logic that requires a lookback window. Second, the data is only available through a live broker connection on TradingView. Users on free accounts, paper trading environments, or instruments not covered by their connected broker will receive na throughout. Third, instrument coverage is uneven: major forex pairs, selected cryptocurrency pairs on exchanges such as Binance, and equities through brokers such as Interactive Brokers are generally supported, but futures, CFDs on many instruments, and equities through data-only feeds often return no data. The statistical estimators in this indicator therefore remain the primary analytical engine. If a broker connection is active, the live bid-ask spread retrieved via these built-in variables can serve as a real-time reference point to validate whether the rolling estimates are in a plausible range for the current session, but it cannot contribute to the historical signal calculations.
None of the outputs should be used as the sole basis for any trading decision.
References
Abdi, F. & Ranaldo, A. (2017) A Simple Estimation of Bid-Ask Spreads from Daily Close, High, and Low Prices. Review of Financial Studies, 30(12).
Amihud, Y. (2002) Illiquidity and Stock Returns: Cross-Section and Time-Series Effects. Journal of Financial Markets, 5(1), 31-56.
Amihud, Y. & Mendelson, H. (1986) Asset Pricing and the Bid-Ask Spread. Journal of Financial Economics, 17(2).
Corwin, S.A. & Schultz, P. (2012) A Simple Way to Estimate Bid-Ask Spreads from Daily High and Low Prices. Journal of Finance, 67(2).
Ellis, K., Michaely, R. & O'Hara, M. (2000) The Accuracy of Trade Classification Rules: Evidence from Nasdaq. Journal of Financial and Quantitative Analysis, 35(4).
Glosten, L.R. & Milgrom, P.R. (1985) Bid, Ask and Transaction Prices in a Specialist Market with Heterogeneously Informed Traders. Journal of Financial Economics, 14(1).
Hasbrouck, J. (2009) Trading Costs and Returns for U.S. Equities: Estimating Effective Costs from Daily Data. Journal of Finance, 64(3).
Kyle, A.S. (1985) Continuous Auctions and Insider Trading. Econometrica, 53(6).
Lee, C.M.C. & Ready, M.J. (1991) Inferring Trade Direction from Intraday Data. Journal of Finance, 46(2).
Parkinson, M. (1980) The Extreme Value Method for Estimating the Variance of the Rate of Return. Journal of Business, 53(1).
Roll, R. (1984) A Simple Implicit Measure of the Effective Bid-Ask Spread in an Efficient Market. Journal of Finance, 39(4).
Rousseeuw, P.J. & Croux, C. (1993) Alternatives to the Median Absolute Deviation. Journal of the American Statistical Association, 88(424).
Fair Value Gap Profile + Rolling POC [BigBeluga]
🔵 OVERVIEW
FVG Profile builds a price-level profile based on detected Fair Value Gaps (FVGs) over a fixed lookback period.
Instead of measuring traded volume alone, this tool aggregates bullish and bearish FVG occurrences into horizontal bins, allowing traders to see where price inefficiencies are most concentrated.
Each profile level represents how many bullish and bearish FVGs formed near that price zone, along with their relative strength, imbalance, and delta volume.
🔵 CONCEPTS
FVG Detection —
• Bullish FVG: when the high two bars back is below the current low.
• Bearish FVG: when the low two bars back is above the current high (both rules are sketched just after this list).
Price Binning — The full price range of the lookback period is divided into fixed bins.
FVG Aggregation — Each detected FVG is mapped to its nearest price bin and counted.
Directional Separation — Bullish and bearish FVGs are stored separately inside each bin.
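A minimal sketch of the two gap rules; the box drawing is illustrative, and the published script's binning and rendering are not reproduced here:
//@version=6
indicator("FVG Detection Sketch", overlay = true)
bool bullFVG = not na(high[2]) and high[2] < low  // gap between the 2-bars-back high and the current low
bool bearFVG = not na(low[2]) and low[2] > high   // gap between the 2-bars-back low and the current high
if bullFVG
    box.new(bar_index - 2, low, bar_index, high[2], border_color = color.green, bgcolor = color.new(color.green, 85))
if bearFVG
    box.new(bar_index - 2, low[2], bar_index, high, border_color = color.orange, bgcolor = color.new(color.orange, 85))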
🔵 FEATURES
Bull / Bear FVG Profile —
• Green segments represent bullish FVG counts.
• Orange segments represent bearish FVG counts.
• Each bin visually shows how many FVGs occurred at that level.
Strength Percentage —
• Each bin displays a % value based on total FVG count.
• The strongest bin is normalized to 100%.
Delta Volume —
• Calculates the difference between bullish and bearish FVG volume per bin.
• Positive delta = bullish dominance.
• Negative delta = bearish dominance.
Heatmap Mode —
• Colors profile levels by relative strength.
• Color direction is driven by delta volume (bullish vs bearish).
Live FVG Visualization — Optionally plots individual bullish and bearish FVG boxes on the chart.
Profile Background — A background frame highlights the full analyzed price range.
🔵 Rolling POC Logic
Unlike a static profile, the Rolling POC moves with price.
It continuously calculates the "peak imbalance level" for the last X bars, providing a moving average of where the market's most significant gaps are forming.
🔵 Moving Average Integration
The indicator features a customizable Moving Average (SMA, EMA, WMA, VWMA, etc.).
This MA helps identify if the price is currently trending toward or away from high-density FVG zones.
An "Auto" length feature is included that scales the MA based on the selected lookback period for optimal smoothing.
🔵 HOW TO USE
Identify FVG Clusters — Strong profile levels highlight prices where inefficiencies repeatedly formed.
Directional Bias — Compare bullish vs bearish segments to determine dominance at each level.
Delta Confirmation — Use delta volume to confirm whether bullish or bearish FVGs control the zone.
Reaction Zones — High-strength bins often act as areas of interest for price reactions.
Heatmap Context — Enable heatmap to quickly spot dominant imbalance zones across the range.
🔵 CONCLUSION
FVG Profile transforms Fair Value Gaps into a structured price-level profile, revealing where inefficiencies cluster and which side dominates those zones.
By combining FVG count, directional balance, delta volume, and strength normalization, it provides a powerful way to analyze imbalance behavior beyond traditional volume profiles.
PineScript integration with Notepad++ (UDL)
THIS IS NOT AN INDICATOR!
This is PineScript integration with Notepad++ text editor (NPP). It supports PineScript v6 as of January 2026. Provides autocompletion, function list and syntax highlighting for *.pine files.
Why would anyone need this?
Pine Editor doesn't provide a function list yet
Pine Editor doesn't allow changing fonts or syntax colors
Provided files together define a color scheme as close to current color scheme of Pine Editor as is possible in NPP. You can change the colors to suit your needs better. For example, I provide a file that changes all user-defined functions to be colored the same way Pine Editor colors imported functions. This provides clear distinction between system and user code.
Also Dark Mode users (on Windows) might not even know that Pine Editor uses Bold for types because it also uses Consolas font which has very thin Bold. Changing a font will make (standard) types stand out more.
INSTALLATION
Go to the source code of this release
For each '@ filename' marker inside the code, create a file with that name and ensure it is encoded as 'UTF-8' without BOM
Copy the following strings up until the first empty line
Paste those strings into newly created file
Remove "// " in front of each of the strings
Save the file
Follow additional instructions for that file if any
Restart Notepad++ after creating all the files.
If you don't want the fuss with copying strings, get the files from GitHub . There you can also see installation instructions, NPP screenshots and a theme to use with this UDL.
Machine Learning Pivot Points (KNN) [SS]
Hey everyone,
Been working on this one for a very long time.
1. What It Is: The Geometric DNA of a Pivot
Machine Learning Pivot Points (KNN) is a predictive structural tool that moves away from traditional lag based oscillators. Instead of waiting for a moving average crossover, this system treats price action as a Geometric Slope. By utilizing a K-Nearest Neighbors (KNN) algorithm combined with regression, the script captures the mathematical "DNA" of the price leading into a pivot and compares it against a live updating database of previous market turns.
2. The Science: Linear Regression as a Signature
If you know me well, you know I am into regression. ChatGPT even called me the "Regression Whisperer" in 2025 (not sure if that is a good thing). The core of this indicator is the Historical Slope Function, which provides the data the KNN uses to actually make its prediction (a sketch of this training-and-matching loop follows the list below).
The Training: Unlike static models, this script "trains" itself in real-time on your specific chart. Every time a 10-bar pivot (High or Low) is confirmed, the script extracts the Linear Regression Slope of the 20 bars leading into that point.
The Library: These slopes are stored in dynamic arrays. This creates a localized "memory" of what a reversal looks like for the specific asset you are trading, which is how the indicator "learns".
The Classification: As the current price moves, its Rolling Slope is constantly calculated. The KNN algorithm then measures the "distance" (similarity) between the current slope and the stored signatures. If the current price "curves" in a way that matches past tops, the indicator flags an Approaching Pivot High.
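Here is a hypothetical reconstruction of that loop. The feature (a 20-bar linear-regression slope), the absolute-distance metric, and k = 2 match the description above, but every name and detail below is illustrative rather than the published source.
//@version=6
indicator("KNN Slope Sketch", overlay = true)
int lb = 10   // pivot confirmation bars (left/right)
int win = 20  // slope signature length
int k = 2     // nearest neighbours
var array<float> hiSlopes = array.new<float>()
var array<float> loSlopes = array.new<float>()
float slope = ta.linreg(close, win, 0) - ta.linreg(close, win, 1)
// store the slope that led into each confirmed pivot
if not na(ta.pivothigh(lb, lb)) and not na(slope[lb])
    array.push(hiSlopes, slope[lb])
if not na(ta.pivotlow(lb, lb)) and not na(slope[lb])
    array.push(loSlopes, slope[lb])
// average distance from the current slope to the k most similar stored slopes
f_meanDist(array<float> db, float s) =>
    float result = na
    if array.size(db) > 0
        array<float> d = array.new<float>()
        for v in db
            array.push(d, math.abs(s - v))
        array.sort(d, order.ascending)
        int n = math.min(k, array.size(d))
        float acc = 0.0
        for i = 0 to n - 1
            acc += array.get(d, i)
        result := acc / n
    result
float dHi = f_meanDist(hiSlopes, slope)
float dLo = f_meanDist(loSlopes, slope)
bool nearHigh = not na(dHi) and not na(dLo) and dHi < dLo
plotchar(nearHigh, "Slope matches pivot-high DNA", "▼", location.abovebar, color.red)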
3. Decoding the Tables
The indicator features a dual-table Cockpit designed for high-speed decision-making.
A. The Directional Helper (Top Right): Live KNN Bias
This is your real-time classification engine.
The Neon Logic: Neon Red ▼ : The current price slope has a high mathematical similarity to historical Pivot Highs.
Neon Lime ▲: The current price slope matches the signature of historical Pivot Lows.
Confidence Metric: This represents the Cluster Similarity. If the current slope is significantly closer to one group than the other, confidence spikes. A confidence level of 90%+ suggests the current price movement is almost identical to the most powerful reversals in the recent lookback.
B. The Backtest Table (Bottom Right)
This table provides a live-calculating Proof of Concept for the current ticker.
Success Rate: Measures how often the KNN's Approaching signal successfully resulted in an actual price reversal before the opposite signal appeared.
Avg Low Move ($): The average dollar/point drop achieved after a "Pivot High" was predicted.
Avg High Move (%): The average percentage ROI gained after a "Pivot Low" was predicted.
4. Visual Cues
To differentiate from standard labels, this script uses v6 Polyline Geometry to create glowing pivots. I think we can all agree we are bored of those little basic triangles. However, because Pine Script caps an indicator at 100 polylines, the script reverts to those boring triangles beyond a designated history so that you can still see past performance with clear markers.
5. Mastering the Logic: Parameters for the Pro
KNN Clusters (K): Set to 2 by default. This tells the script to average the distance to the 2 most similar past events. Increasing this makes the model "stricter" but less frequent.
Pivot Window: This controls the "length" of the signature. A 20-bar window captures the broader curve of the trend, while a shorter window focuses on micro-reversals.
Bars Left/Right: This defines the pivots themselves. It tells the script what counts as a confirmed pivot to be added to the memory bank.
6. Concluding Remarks
This indicator represents a bridge between Statistical Geometry and Machine Learning. By focusing on the Slope Signature rather than simple price levels, it allows you to see the market's intention before the pivot is fully formed. Whether you are scalping the 1-minute or swing trading the Daily, the KNN logic adapts to the volatility and DNA of the chart in front of you.
I hope this provides a new edge to your trading workflow. Safe trades!
Market Structure Volume Profiles [Kioseff Trading]
Hello traders and friends!
Introducing: "Market Structure Volume Profiles".
This script combines market structure with volume profiling and CVD to show how volume develops inside each structural changes of the market.
Instead of building one continuous profile across a session, this script creates a new volume profile for each completed BoS or CHoCH, allowing you to study the internal auction of each behavioral regime independently.
🔹Features
Detects and displays BoS and CHoCH
Builds a dedicated volume profile for each new structure
Displays profiles in Stacked or Split mode
Optional Mini Profile mode for a compact structure profile view
Shows buy-side and sell-side volume distribution
Displays POC for each profile
Optional extended POC and naked POC tracking
Displays Value Area (VA) for each completed structure
Tracks and plots CVD by structural leg
Optional market structure candle coloring
Optional structure statistics label
Uses lower timeframe data to build more detailed internal volume distribution
🔹How it works
This script tracks market structure and recalculates volume profiles for each structural change.
Whenever price confirms a Break of Structure (BoS) or Change of Character (CHoCH), the volume accumulated during that completed leg is organized into a profile. This allows you to examine how volume was distributed throughout the move, where the heaviest participation occurred, and whether buying or selling dominated the leg.
Rather than asking only where price moved, this script helps answer:
where volume concentrated during the move
whether the move was supported by participation
where value developed inside the structural range
how buy and sell volume were distributed across price
Each profile is built from lower timeframe data so that the structural leg can be broken into price levels and analyzed internally.
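As a simplified illustration of the binning idea (not the script's actual engine, which anchors legs at BoS/CHoCH and pulls lower-timeframe data), a leg's volume can be distributed into price rows like this:
//@version=6
indicator("Leg Profile Sketch")
int rows = 20
int legBars = 50  // fixed window standing in for a structural leg
f_legProfile() =>
    float hi = ta.highest(high, legBars)
    float lo = ta.lowest(low, legBars)
    float step = (hi - lo) / rows
    array<float> vols = array.new<float>(rows, 0.0)
    if not na(step) and step > 0
        for i = 0 to legBars - 1
            float mid = math.avg(high[i], low[i])
            if not na(mid)
                int idx = math.min(rows - 1, int((mid - lo) / step))
                array.set(vols, idx, array.get(vols, idx) + nz(volume[i]))
    vols
array<float> vols = f_legProfile()
int poc = array.indexof(vols, array.max(vols))  // row holding the heaviest volume
plot(poc, "POC row index", color.fuchsia)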
🔹What it shows
🔸Market Structure
The script identifies major structural events and labels them as:
BoS
CHoCH
This allows profiles to be tied directly to meaningful structural transitions.
🔸Volume Profile by Structure
Each completed structural leg gets its own profile, showing:
buy volume at each level
sell volume at each level
total participation across the leg
the internal shape of the auction
This makes it easier to compare continuation legs against reversal legs.
You can color BoS- and CHoCH-generated profiles distinctly, making it easier to track where each profile sits inside broader market action.
🔸Point of Control (POC)
The script can display the POC of each structural profile, showing the price level with the highest traded volume during that leg.
The script can also display the Value Area for each profile, helping identify where the majority of volume was concentrated during the structural move.
🔸CVD
The script tracks Cumulative Volume Delta throughout the current structure and plots it in the pane.
CVD can be reset by:
CHoCH
BoS + CHoCH
Day
Week
This makes it possible to study delta behavior in a structural context rather than only in a session-based one.
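A minimal sketch of structure-anchored CVD with a reset, shown here with a day reset for simplicity (in the script itself the reset can also be CHoCH- or BoS-driven):

//@version=6
indicator("Structural CVD sketch")
// Day-reset example; CHoCH/BoS resets would replace this condition.
resetEvent = timeframe.change("D")
var float cvd = 0.0
delta = close > open ? volume : close < open ? -volume : 0.0
cvd := resetEvent ? delta : cvd + delta
plot(cvd, "CVD", cvd >= 0 ? color.teal : color.red)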
🔸Structure Stats
Optional structure statistics can be displayed, including:
Range
High
Low
Buy volume
Sell volume
Delta
Return
This gives a summary of the completed structural move.
🔸Why use it
This script is designed for traders who want to combine:
market structure
volume profiling
delta/CVD
auction logic
Because profiles are anchored to structure instead of session time, they can help reveal differences between:
strong continuation legs
weak continuation legs
reversal legs
imbalanced breakouts
balanced rotations
🔸Mini Profiles
The indicator has two separate drawing methods for each VP.
The detailed profile is used when the structural move contains enough bar data to support it.
When not enough data exists, a mini profile is used instead. You can also choose to use only mini profiles if you prefer that style.
The internal logic behind both profile types is similar; however, the detailed profile "scrunches" when there isn't enough bar data to calculate it - that's when the mini profile takes over.
🔸Split Profile
You can also choose to show split volume profiles.
This is more similar to how a delta profile is shown. This is a styling preference only.
🔸Rows Limit
Detailed profiles can use up to 500 rows.
Higher values were giving a "response too large" error, so I restricted the max to 500.
🔹Summary
That’s about it!
The goal of this script is simply to combine market structure with volume profiles and CVD so you can see how volume develops inside each structural move instead of across arbitrary time windows.
By anchoring profiles to BoS and CHoCH, you can study how participation builds during continuations, reversals, and rotations - and get a better feel for how each move was actually formed internally.
Hope you find it useful (:
Thank you guys and thank you TradingView!
Liquidity Thermal Map [BigBeluga]
🔵 OVERVIEW
Liquidity Thermal Map visualizes where the highest traded volume has accumulated across price levels over a fixed lookback period.
Instead of plotting classic volume profiles with bars, the indicator builds a horizontal thermal heatmap directly on the chart, highlighting areas of strong and weak liquidity using smooth color gradients.
This makes it easy to identify high-interest price zones, volume clusters, and the dominant Point of Control (PoC) at a glance.
🔵 CONCEPTS
Price-Level Volume Aggregation — The indicator divides the entire price range of the selected lookback period into fixed horizontal bins.
Volume Binning — Each bin accumulates total traded volume whenever price closes near its midpoint.
Thermal Gradient Mapping — Volume intensity is translated into a color gradient, forming a continuous liquidity heatmap (see the sketch after this list).
Point of Control (PoC) — The price level with the highest accumulated volume is highlighted using a distinct PoC color.
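The sketch below shows the general binning-plus-gradient idea under simplified assumptions (30 fixed bins, close-based assignment); it is not the published implementation:

//@version=6
indicator("Thermal bins sketch", overlay = true, max_boxes_count = 60)
lookback = input.int(200, "Lookback")
hi = ta.highest(high, lookback)
lo = ta.lowest(low, lookback)
var levels = array.new<float>(30, 0.0)
if barstate.islast and bar_index >= lookback
    array.fill(levels, 0.0)
    // Assign each bar's volume to the bin containing its close.
    for i = 0 to lookback - 1
        idx = math.min(29, math.max(0, int((close[i] - lo) / math.max(hi - lo, syminfo.mintick) * 29)))
        array.set(levels, idx, array.get(levels, idx) + volume[i])
    maxVol = array.max(levels)
    step = (hi - lo) / 30
    // Map each bin's volume onto a cold-to-warm gradient.
    for b = 0 to 29
        heat = color.from_gradient(array.get(levels, b), 0, maxVol, color.new(color.blue, 85), color.new(color.orange, 25))
        box.new(bar_index - lookback + 1, lo + step * (b + 1), bar_index, lo + step * b, border_color = na, bgcolor = heat)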
🔵 FEATURES
Liquidity Heatmap — Displays horizontal volume concentration directly on the chart background.
Fixed Resolution Bins — Uses 30 evenly spaced price levels to maintain a clean and readable structure.
Adaptive Lookback Period — Volume is calculated only within the user-defined historical window.
Two-Stage Color Gradient —
• Low volume → transparent / muted tones
• High volume → stronger, warmer colors
PoC Highlighting — The most traded price level is emphasized with a dedicated PoC color and volume label.
Range-Aware Scaling — Automatically adapts to the highest and lowest prices within the lookback period.
🔵 BUY / SELL LIQUIDITY SCALE
Directional Liquidity Breakdown — The vertical scale on the right side summarizes how total traded volume is distributed between bullish and bearish candles within the analyzed range.
Buy Liquidity (Green) — Represents the total traded volume during candles that closed higher than they opened.
This approximates aggressive buying pressure and shows how much volume has accumulated below the current price.
Sell Liquidity (Red) — Represents the total traded volume during candles that closed lower than they opened.
This reflects periods where selling pressure dominated and shows how much volume accumulated above the current price.
Liquidity Percentage — Each side displays the percentage share of total traded volume.
This helps quickly identify which side of the market controlled the majority of activity within the lookback range.
Volume Imbalance — The Imbalance value at the top shows the absolute difference between total buy and sell liquidity (see the sketch after this section).
A larger imbalance suggests stronger directional dominance from either buyers or sellers.
Interactive Hover Details — Hovering over the liquidity bars reveals a tooltip showing the exact accumulated volume for that section (for example total liquidity below the current price).
This allows traders to quickly inspect how much volume has been concentrated on each side of the market.
Visual Pressure Gauge — The vertical red/green bar acts as a quick visual gauge of market pressure, allowing traders to instantly see whether buyers or sellers dominate liquidity within the selected range.
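A minimal sketch of the directional split and imbalance math under the usual up-candle/down-candle volume proxy; names and the plotted output are illustrative:

//@version=6
indicator("Liquidity imbalance sketch")
lookback  = input.int(200, "Lookback")
buyLiq    = math.sum(close > open ? volume : 0, lookback)   // up-candle volume
sellLiq   = math.sum(close < open ? volume : 0, lookback)   // down-candle volume
total     = buyLiq + sellLiq
buyPct    = total > 0 ? buyLiq / total * 100 : 0.0          // percentage share
imbalance = math.abs(buyLiq - sellLiq)
plot(imbalance, "Imbalance")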
🔵 HOW TO USE
Identify Liquidity Clusters — Bright or dense zones indicate prices where significant trading activity occurred.
Support & Resistance Context — High-volume zones often act as reaction areas for price.
PoC Tracking — The PoC shows where the market spent the most time and volume.
Breakout Awareness — Moves away from dense liquidity areas may signal expansion into lower-volume zones.
Contextual Analysis — Use the heatmap as a background liquidity reference alongside trend or structure tools.
🔵 VISUAL LOGIC
Cooler Colors — Lower volume participation.
Warmer Colors — Higher volume concentration.
PoC Label — Displays the exact volume value of the strongest liquidity level.
🔵 CONCLUSION
Liquidity Thermal Map provides a clean, intuitive way to visualize where liquidity truly exists across price.
By transforming raw volume data into a continuous thermal layer, it helps traders quickly locate dominant trading zones, identify high-interest price levels, and better understand how volume is distributed within the market.
VIX Curve Pro - Real-Time Term Structure with Statistics
This indicator displays the VIX term structure as a spatial curve directly on the chart, allowing you to instantly identify whether the volatility market is in contango or backwardation.
It shows the relationship between different VIX maturities (9D, 30D, 3M, 6M, 1Y) as a single curve.
It also shows statistics and helps with market regime detection:
Historical percentile rankings for key VIX ratios
Real-time min/max/average/median values over lookback period
Current VIX term values with regime indicators
Understand where current conditions sit relative to historical context
Automatic identification of contango vs backwardation states
Visual indicators showing which part of the curve is inverted
Optional information guide explaining market states and trading implications
How to Use:
The curve shows the "shape" of volatility expectations across time. An upward-sloping curve (contango) means calm markets where longer-term volatility is priced higher than near-term. A downward-sloping curve (backwardation) shows market stress, where near-term volatility spikes above longer-term expectations.
Use the statistical tables to understand whether current ratios are at historical extremes (high percentile rank) or lows (low percentile rank), helping you gauge whether volatility structures are stretched or compressed.
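If you want to experiment with the underlying ratio yourself, here is a minimal sketch of a contango/backwardation check with a percentile rank; the 252-bar lookback is an assumption:

//@version=6
indicator("VIX term-structure ratio sketch")
lookback = input.int(252, "Percentile lookback")
vix   = request.security("TVC:VIX",    timeframe.period, close)
vix3m = request.security("CBOE:VIX3M", timeframe.period, close)
ratio = vix / vix3m    // > 1.0 = backwardation (stress), < 1.0 = contango (calm)
rank  = ta.percentrank(ratio, lookback)
plot(ratio, "VIX/VIX3M", ratio > 1 ? color.red : color.green)
plot(rank, "Percentile rank", display = display.data_window)

A ratio above 1.0 with a high percentile rank flags an unusually stressed term structure relative to the lookback window.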
Perfect for:
Volatility traders and options strategists
VIX futures and options traders
Understanding market fear and complacency levels
Timing volatility trades based on term structure
In the example above, I've added a chart with TVC:VIX, CBOE:VIX9D, CBOE:VIX3M and CBOE:VIX6M. You can see that although they are still in backwardation (short-term VIX is higher than long-term), the curve might be close to flipping. This kind of situation deserves extra attention. You can set alerts for when it flips.
This is a simple but useful indicator. Let me know if you have any questions!
Stop Loss Cascades (Breakouts) [Kioseff Trading]
Hello friends and traders!
🔹Introduction
This indicator, "Stop-Loss Clustering (Breakouts)", attempts to model trader stop-loss placement logic and identify price areas where a large number of stop losses might cluster.
The idea is, if stop losses are indeed highly concentrated in a specific area, price extending through that area may produce high-velocity breakout conditions via forced order flow.
I'll cover this topic more thoroughly throughout the description. For now, just know that stop loss location & size data is not publicly available. Any model of their concentration locations is highly assumptive.
However, there's some reasonable academic research we can reference to make worthwhile estimates.
Academic references supporting the concepts discussed are listed at the end of this description. To maintain readability, I won't cite individual statements inline.
🔹The Premise
🔸Liquidity, Behavior, and Stop Cascades
Markets operate through a continuous limit order book, where two fundamental order types interact:
Limit orders, which provide liquidity by resting in the book
Market orders, which consume liquidity by exhausting those resting orders
This mechanical interaction drives price movement - incoming order flow consuming available liquidity.
This raises the question: does liquidity distribute evenly across the LOB?
If it did: if liquidity were evenly distributed, price impact could be modeled as a relatively smooth function of incoming order flow.
But it doesn’t: liquidity is unevenly distributed. Academic research supports this claim and, regardless, it is an intuitive conclusion most traders arrive at.
Liquidity forms localized concentrations and gaps.
Liquidity concentrations are commonly referenced as: liquidity shelves, liquidity clusters, liquidity zones.
Liquidity gaps are commonly referenced as: liquidity vacuums, thin book zones.
As a result, identical order flow can produce very different price movements depending on the state of the order book.
Let’s consider an example..
Assume price is trading at $99.
The price levels $100, $101, $102 have resting sell limit order concentrations of 100.
This is where you come in.
You execute a market order buy for 300 size.
Your order first exhausts all sell-side resting order concentrations at the $100 level.
You still have 200 size that needs to be filled, and the ask price has moved from $100 to $101.
Your order will now sequentially exhaust available liquidity at the $101 level, the ask price will increase to $102, and your final 100 size will exhaust the $102 level.
To keep the example simple, we’ll say that your order moved price from $99 to $102, and now the ask price is $103.
But, you still want to accumulate.
The nearest sell-side levels in the LOB are $103, $104, $105.
The $103 level has a sell limit order concentration of 500.
$104 and $105 both have concentrations of 50.
You execute your same market order buy for 300 size.
This time, price doesn't move… at all.
Instead, you consumed 300 of the 500 size at $103 with your order, and the level remains a barrier.
Your order was absorbed by available liquidity.
This example demonstrates how price movement depends on available liquidity , not simply the size of incoming orders.
In the first scenario, liquidity was thin and the order walked through multiple price levels, causing price to move quickly.
In the second scenario, a large concentration of resting liquidity absorbed the same order, preventing price from advancing.
🔸Liquidity Does Not Distribute Evenly
Alright, we understand that liquidity doesn’t distribute evenly. And we understand that high concentrations of liquidity can act as price barriers (liquidity shelves) while sparse liquidity can permit rapid price movement - we saw this in our example above.
There’s an important question we should ask next before we move on..
If liquidity distributes unevenly, then where does it tend to cluster? And where does it tend to thin?
Of course, knowing these tendencies provides multi-purpose advantages.
If price approaches a liquidity vacuum - a local block of the order book with thin resting liquidity - rapid price movement can occur without requiring unusually strong aggressive order flow.
If price approaches a liquidity shelf - a local block of the order book with thick resting liquidity - price can stall or contract even if the same level of aggressive order flow that previously moved price continues.
With this in mind, order flow intensity alone does not determine price movement. The distribution of liquidity across surrounding price levels plays a similarly important role.
So, is there any evidence of where liquidity tends to concentrate?
🔸Empirical Observations
Empirical research on limit order books shows that liquidity does not distribute smoothly across the LOB. Instead, depth tends to concentrate at specific price levels, producing irregular profiles with localized peaks in resting liquidity.
These concentrations arise because order placement is not random. Traders frequently anchor decisions to widely observed reference prices such as:
• prior highs
• prior lows
• round numbers
• widely referenced price extremes
Because many traders monitor the same price history, order placement decisions often reference similar price levels.
This concept is simpler than it sounds.
Let’s use market structure traders for example.
Market structure traders frequently reference prior swing highs and swing lows when making decisions about entries, exits, and risk.
A trader entering a long position may place their stop-loss below a recent swing low, reasoning that if price breaks that level, the trade idea is invalidated.
A trader entering a short position may place their stop-loss above a recent swing high for the same reason.
Timeframe price aggregation may differ; however, we’re all looking at roughly the same recent highs and lows when evaluating a chart (structure).
When many traders collectively reference the same prices, orders may accumulate near those levels. This produces localized depth concentrations, which traders refer to as liquidity shelves.
Liquidity shelves act as temporary barriers where the book contains disproportionately large resting liquidity compared to surrounding prices.
🔸Research documenting liquidity clustering includes :
Bourghelle & Cellier (2007), who find that limit orders cluster at prominent price levels (especially round numbers), creating localized depth concentrations that can act as price barriers.
Kavajecz & Odders-White (2004), who demonstrate that prices identified as support or resistance coincide with higher resting limit order depth.
These findings suggest that many commonly observed price levels may correspond to real concentrations of liquidity rather than being purely visual artifacts on a chart.
Kavajecz & Odders-White (2004) is an important observation for support/resistance traders!
Kavajecz & Odders-White (2004) show that levels traders commonly call support and resistance often align with areas where more limit orders are resting in the order book.
This suggests a plausible mechanical pathway through which support and resistance levels can emerge!
🔸Liquidity Shelves and Price Interaction
When liquidity clusters around a price level, the resulting liquidity shelf can influence how price behaves when it approaches that area.
Price interaction with these shelves is state-dependent:
If incoming order flow is absorbed, price may stall or reverse
If resting liquidity is consumed, price may transition rapidly to the next liquidity zone
Once a shelf is depleted, follow-through can accelerate due to thinner liquidity beyond the level
Research on order book dynamics supports this mechanical view of price movement.
For example:
Jean-Philippe Bouchaud, J. Doyne Farmer, and Fabrizio Lillo (2009) demonstrate that price impact emerges from the interaction between order flow and finite liquidity
From this perspective, price does not move simply because a level is crossed.
Price moves because available liquidity at that level has been consumed.
🔸Latent Liquidity and Stop Clustering
In addition to visible liquidity from limit orders, markets also contain latent liquidity.
This is where ”Stop-Loss Clustering (Breakouts)” becomes important - we’re almost done!
Latent liquidity consists of conditional orders such as stop-losses that are not visible in the order book until triggered.
Although these orders aren’t public information, empirical studies show that stop orders tend to cluster near widely referenced price levels.
Research by Carol Osler (2003, 2005) using institutional FX order data finds that stop-loss orders frequently accumulate just beyond salient price levels such as prior highs and lows.
When these stops trigger, they convert into aggressive market orders and can generate bursts of directional order flow that may accelerate price movement.
🔸Stop-Loss Cascades
Stop losses add another layer of latent order flow that isn’t visible in the order book until it triggers.
If enough of them sit around the same price area, think of it as “hidden pressure” waiting to activate. Nothing happens while price trades nearby, but once that level trades, those stops convert into market orders and immediately begin consuming available liquidity.
This matters because stop placement is unlikely to be random in most instances. Traders frequently anchor stops to widely observed prices such as prior highs, prior lows, or other prominent structure points, or use volatility methods such as ATR, etc.
So when price approaches one of these areas, two things can happen.
If the resting liquidity there is large enough, the incoming orders can be absorbed and price may stall or reject.
But if that liquidity gets consumed, the stops sitting just beyond the level begin triggering. Those triggered stops add additional market orders, which consume more liquidity and can push price further into the next layer of stops.
This creates a cascading effect:
price reaches a stop cluster
stops trigger and convert into market orders
liquidity gets consumed faster
price moves further, triggering more stops
When this chain reaction starts, price can transition very quickly from a slow battle near the level to rapid expansion through it.
This is one of the mechanical reasons why some reference-point breaks barely move, while others accelerate rapidly.
🔹How It Works
Now that we understand the why - let’s discuss how the indicator works.
🔸Absorption Extremes
The image above shows the absorption extremes model.
In this model, the indicator treats recent & relevant swing points as plausible stop clustering candidates.
You can find similar swing point identification mechanics in other indicators.
However, this model assigns subsequent volume to the swing level after its formation.
There are limitations and assumptions - let’s go over them.
The images above explain how the indicator determines the intensity of a possible stop-cluster around a swing level.
1: The indicator assigns all “directional volume” to a swing level after it’s formed and while it remains the closest active swing point to the current price.
“Buy volume” is assigned to the closest active swing low.
“Sell volume” is assigned to the closest active swing high.
I say “buy volume” and “sell volume” because there are assumptions about what constitutes each classification.
The indicator follows the traditional two-region tick model for classifying buy volume and sell volume.
Higher close = “buy volume” proxy
Lower close = “sell volume” proxy
Depending on the granularity you select (the indicator is capable of using tick data), this model can be more/less accurate.
However, even with tick-level data and bid/ask quotes, trade direction must still be inferred using classification rules. Because some trades occur inside the spread or involve hidden liquidity, perfect classification is not possible without exchange aggressor flags.
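Here is a minimal sketch of that two-region tick rule applied to 1-minute sub-bars; variable names are illustrative:

//@version=6
indicator("Tick-rule classification sketch")
// 1-minute sub-bars of the current chart bar (finer granularity = better proxy).
[ltfC, ltfV] = request.security_lower_tf(syminfo.tickerid, "1", [close, volume])
buyVol  = 0.0
sellVol = 0.0
for i = 1 to array.size(ltfC) - 1
    cur  = array.get(ltfC, i)
    prev = array.get(ltfC, i - 1)
    v    = array.get(ltfV, i)
    if cur > prev          // higher close = "buy volume" proxy
        buyVol += v
    else if cur < prev     // lower close = "sell volume" proxy
        sellVol += v
// buyVol would be assigned to the closest active swing low,
// sellVol to the closest active swing high (per the model above).
plot(buyVol - sellVol, "Sub-bar delta")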
As for the assumptions:
The model assigns ALL classified volume to the swing level.
In reality, traders use a wide range of risk management methods, and not every position will place a stop loss directly at the most recent swing point. ATR-based stops, percentage-based stops, and other volatility-based methods are also common.
Because the true distribution of stop placement is unobservable, the model assumes that positions entered are structurally invalidated at the closest swing level based on their classified direction.
As a result, the values displayed by the indicator should be interpreted as relative proxies for potential stop concentration, rather than precise estimates of actual stop-loss size.
The displayed magnitudes are intentionally exaggerated and comparative, designed to highlight where stop pressure may accumulate relative to other levels.
The images above show how to interpret the indicator when using this model.
The image above shows the triggered stop-cluster graph.
Each point corresponds to a triggered stop-cluster - assuming it exists.
The greater the size attached to that cluster, the further distant the data point is placed.
Far away from zero line = large size.
Close to zero line = low size.
Radiating/glowing points indicate a potentially large cluster trigger.
🔸 Volatility-At-Entry Model (Time Scaled)
The Volatility-At-Entry model uses ATR scaled by various timeframes to predict plausible stop loss placements.
For this model, the indicator uses the same tick classification model to assign volume directionally.
Volume is then dispersed across six common timeframes (1m, 5m, 15m, 30m, 1h, 4h) and 3 common ATR multiples for risk management (1ATR, 1.5ATR, 2ATR).
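A simplified sketch of the level generation under these assumptions (the 14-period ATR length is assumed; the published script may differ):

//@version=6
indicator("Volatility-at-entry sketch", overlay = true)
// Timeframe-scaled ATRs across the six common timeframes.
atr01 = request.security(syminfo.tickerid, "1",   ta.atr(14))
atr05 = request.security(syminfo.tickerid, "5",   ta.atr(14))
atr15 = request.security(syminfo.tickerid, "15",  ta.atr(14))
atr30 = request.security(syminfo.tickerid, "30",  ta.atr(14))
atr1h = request.security(syminfo.tickerid, "60",  ta.atr(14))
atr4h = request.security(syminfo.tickerid, "240", ta.atr(14))
atrs  = array.from(atr01, atr05, atr15, atr30, atr1h, atr4h)
mults = array.from(1.0, 1.5, 2.0)
// 6 timeframes x 3 multiples = 18 candidate stop prices below a long entry.
levels = array.new<float>()
for a = 0 to array.size(atrs) - 1
    for m = 0 to array.size(mults) - 1
        array.push(levels, close - array.get(atrs, a) * array.get(mults, m))
plot(array.min(levels), "Farthest candidate stop", color.red)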
This model assumes traders are entering positions across various timeframes and are scaling risk congruent with those timeframes.
For instance,
A trader using the 1-minute chart for opportunity is more likely to use a stop loss closer to entry than a trader using the 4-hour chart for opportunity.
If this assumption is reasonable to you - great, we can move forward!
The image above visualizes the model.
Purple-shaded regions indicate a price area with less opportunity for stop loss clustering. Either transaction intensity around eligible price areas was low, or position accumulation wasn’t given sufficient time.
Pink-shaded regions indicate a price area with greater opportunity for stop loss clustering. Volume was significant around these regions or price has traded within proximity for extended periods.
This model naturally shows more future opportunity than historical outcomes. You can choose to show historical outcomes in the settings; this image shows examples of such outcomes.
The image above shows the triggered stop loss graph in effect for this model. Stop clusters are distributed across more price areas with this model - from low intensity to high intensity. Therefore, a cluster is almost always “triggering” to some degree.
A classification model for what’s typical and what’s unusual is used for the graph in this case. Radiating points always indicate large stop clusters triggered. Anything within the green/pink line indicates usual size.
🔸Typical Move
The image above explains the nearest cluster information table.
The size and location of the nearest buy-stop cluster and sell-stop cluster are recorded.
Additionally, the indicator identifies whether clusters of similar size were triggered in the past, and how price behaved following those events.
Since all models here are highly assumptive, and similar sized clusters might only have one or two relative neighbors, treat these measurements as a description of history rather than a prediction.
The model takes the logarithm of the current stop-volume (buy or sell) to normalize its scale and compare it with a historical dataset of previously observed stop-volume sizes that have also been log-scaled.
It then identifies historical observations whose sizes are most similar to the current value, either by selecting all observations within a tolerance range around that value (where the range is based on the typical spacing between historical observations), or by selecting the single closest match.
Finally, the model retrieves the historical price moves associated with those matched observations, producing a sample of “typical moves” that occurred when stop-volume magnitude was similar to the current situation.
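In sketch form, the matching step might look like this; the function name and tolerance argument are hypothetical stand-ins for the indicator's internals:

//@version=6
indicator("Typical-move matching sketch")
// Log-scale the current cluster volume, then collect the historical price
// moves whose cluster volumes fall within a tolerance band around it.
f_typicalMoves(float curVol, array<float> histVols, array<float> histMoves, float tol) =>
    target  = math.log(curVol)    // assumes curVol > 0
    matches = array.new<float>()
    for i = 0 to array.size(histVols) - 1
        if math.abs(math.log(array.get(histVols, i)) - target) <= tol
            array.push(matches, array.get(histMoves, i))
    matches
// Demo with empty history; 0.25 is an assumed tolerance, not the script's.
var histVols  = array.new<float>()
var histMoves = array.new<float>()
plot(array.size(f_typicalMoves(volume, histVols, histMoves, 0.25)), "Matches")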
🔸Ratio Meter
The stop-cluster ratio meter shows the current sum of all active and triggered buy-side and sell-side clusters.
This meter is useful for quick scanning across assets to see if active or recently triggered stop clusters are lopsided.
🔸Additional Features
The single most important setting outside model selection is the lower timeframe used to retrieve volume from.
This setting is set to 1-minute data by default because it works with paid and free plans. If you want better granularity, I strongly suggest changing this setting to either 1-second or 1-tick. This will reduce the number of identifiable cluster locations, because finer-granularity data yields fewer programmatically retrievable values.
🔹Closing Remarks
Stop-loss clustering is an appealing concept because it offers a plausible explanation for why some breakouts accelerate so quickly while others stall. When a large number of conditional orders sit near the same price, a breakout through that area can trigger a cascade of market orders that rapidly consume liquidity and push price toward the next available zone.
However, it’s important to remember that the models used in this indicator are approximations, not direct measurements. True stop-loss locations and sizes are not publicly observable, and many traders use different risk management techniques that cannot be perfectly inferred from chart data alone. The goal of this indicator is therefore not to identify exact stop locations, but to highlight price areas where stop pressure may plausibly accumulate relative to surrounding levels.
Like any model based on behavioral assumptions and historical observations, results should be interpreted probabilistically. Large clusters do not guarantee breakouts, and small clusters do not guarantee quiet price behavior. Instead, the indicator is best used as a tool for context and situational awareness.
References
General Microstructure and Price Formation
Madhavan, A. (2000). Market microstructure: A survey. Journal of Financial Markets, 3(3), 205–258.
O'Hara, M. (1995). Market Microstructure Theory. Blackwell.
Biais, B., Glosten, L., & Spatt, C. (2005). Market microstructure: A survey of microfoundations, empirical results, and policy implications. Journal of Financial Markets, 8(2), 217–264.
Limit Order Books and Liquidity as Resting Orders
Gould, M. D., Porter, M. A., Williams, S., McDonald, M., Fenn, D. J., & Howison, S. D. (2013). Limit order books. Quantitative Finance, 13(11), 1709–1742.
Rosu, I. (2009). A dynamic model of the limit order book. Review of Financial Studies, 22(11), 4601–4641.
Biais, B., Hillion, P., & Spatt, C. (1995). An empirical analysis of the limit order book and the order flow in the Paris Bourse. Journal of Finance, 50(5), 1655–1689.
Liquidity Clustering and Depth Concentration
Kavajecz, K. A., & Odders-White, E. R. (2004). Technical analysis and liquidity provision. Review of Financial Studies, 17(4), 1043–1071.
Bourghelle, D., & Cellier, A. (2007). Limit order clustering and price barriers on financial markets. Working paper / SSRN.
Order Flow and Price Impact
Bouchaud, J.-P., Farmer, J. D., & Lillo, F. (2009). How markets slowly digest changes in supply and demand. In Handbook of Financial Markets: Dynamics and Evolution.
Stop Orders and Price Cascades
Osler, C. L. (2003). Currency orders and exchange-rate dynamics: Explaining the success of technical analysis. Journal of Finance, 58(5), 1791–1819.
Osler, C. L. (2005). Stop-loss orders and price cascades in currency markets. Journal of International Money and Finance, 24(2), 219–241.
Liquidity Provision and Execution
Ho, T., & Stoll, H. (1981). Optimal dealer pricing under transactions and return uncertainty. Journal of Financial Economics, 9(1), 47–73.
Almgren, R., & Chriss, N. (2000). Optimal execution of portfolio transactions. Journal of Risk, 3(2), 5–39.
Menkveld, A. J. (2013). High frequency trading and the new market makers. Journal of Financial Markets, 16(4), 712–740.
Behavioral Anchoring and Attention
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Barber, B. M., & Odean, T. (2008). All that glitters: The effect of attention and news on the buying behavior of individual and institutional investors. Review of Financial Studies, 21(2), 785–818.
George, T. J., & Hwang, C. Y. (2004). The 52-week high and momentum investing. Journal of Finance, 59(5), 2145–2176.
Mizrach, B., & Weerts, S. (2007). Highs and lows: A behavioral and technical analysis. SSRN working paper.
Swing Profile [BigBeluga]
🔵 OVERVIEW
Swing Profile is a dynamic swing-based volume profiling tool that builds a complete volume profile for each completed market swing.
Instead of using fixed sessions or time ranges, the indicator anchors its profile strictly between confirmed swing highs and swing lows, allowing traders to analyze where volume accumulated inside each directional leg.
The profile updates in real time while a swing is still forming and finalizes once the swing direction flips, giving both historical and live insight into volume behavior.
🔵 CONCEPTS
Swing-Anchored Profiling — Volume is calculated only between confirmed swing highs and lows detected by the Swing Length input.
Directional Legs — Each bullish or bearish swing leg gets its own independent volume profile.
ATR-Adaptive Bins — Profile bin size is automatically scaled using ATR, keeping resolution consistent across volatility regimes (see the sketch after this list).
Real-Time Rebuild — While a swing is still active, the profile continuously recalculates and redraws.
Finalized Profiles — Once direction flips, the profile is locked and marked as a completed swing.
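A minimal sketch of the ATR-adaptive bin idea; the 50-bar swing window and 0.5 scaling factor are assumptions for illustration, not the published values:

//@version=6
indicator("ATR-adaptive bin sketch", overlay = true)
// swingHigh/swingLow stand in for the detected swing extremes.
swingHigh = ta.highest(high, 50)
swingLow  = ta.lowest(low, 50)
binHeight = ta.atr(200) * 0.5    // bin height scales with volatility
bins      = math.max(1, int((swingHigh - swingLow) / binHeight))
plot(bins, "Bins per swing")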
🔵 FEATURES
Swing Volume Profile — Displays horizontal volume distribution for each swing leg.
Point of Control (PoC) — Highlights the price level with the highest traded volume inside the swing.
Buy / Sell Volume Separation — Tracks bullish (buy) and bearish (sell) volume inside each profile.
Delta Volume Calculation — Shows net buying vs selling pressure as a percentage.
Profile Outline — A polyline traces the outer shape of the volume distribution.
HeatMap Mode — Optional heatmap visualization showing volume intensity by color gradient.
ZigZag Swing Connector — Visual connection between swing highs and lows for structure clarity.
Custom Label Sizing — Adjust label size (Tiny → Huge) for clean chart scaling.
🔵 HOW TO USE
Identify High-Interest Zones — Use the PoC to locate price levels where the market spent the most time during a swing.
Trend Strength Analysis — Strong directional swings often show volume skewed toward one side of the profile.
Pullback Zones — Profiles help identify areas where price may react during retracements.
Continuation vs Reversal — Delta volume reveals whether buying or selling dominated the swing.
Live Monitoring — While a swing is forming, watch the real-time profile to anticipate where structure may complete.
🔵 DATA LABELS
T — Total traded volume inside the swing.
B — Buy volume (bullish candles).
S — Sell volume (bearish candles).
D — Delta volume (% difference between buy and sell volume; see the sketch below).
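As a rough sketch of how these labels could be derived (the newSwing flag is a placeholder for the indicator's direction-flip condition):

//@version=6
indicator("Swing label math sketch")
// `newSwing` is a placeholder for the swing-flip reset condition.
newSwing = false
var float buyV  = 0.0
var float sellV = 0.0
if newSwing
    buyV  := 0.0
    sellV := 0.0
buyV  += close > open ? volume : 0.0    // B: bullish-candle volume
sellV += close < open ? volume : 0.0    // S: bearish-candle volume
totalV   = buyV + sellV                 // T: total swing volume
deltaPct = totalV > 0 ? (buyV - sellV) / totalV * 100 : 0.0    // D: delta %
plot(deltaPct, "Delta %")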
🔵 CONCLUSION
Swing Profile delivers a precise, structure-aware view of volume by anchoring profiles directly to market swings.
By combining real-time profiling, PoC detection, delta analysis, and adaptive resolution, it provides deep insight into where participation truly occurred — making it a powerful tool for swing traders, structure traders, and volume-focused strategies.