LTX VIDEO 2.3 · LORA TRAINER

Train a LoRA for LTX Video 2.3 in one afternoon.

Drop in a folder of clips. Grix captions them, runs a Train LTX job on fal.ai, and hands you back a LoRA that drops into any LTXV endpoint — fast or quality, start-end frame, reference video, lip sync.

From $5 · Distilled + full · Safety-checked · Never used to train public models

TRAINING · STEP 1800 / 2000 · loss 0.041
eval @ step 1800 · step 400 · step 1200 · step 1800
Grix: loss flattened around step 1500 — I'd stop at 1800 and save credits. Want to?
lora/sora-camera-r4 · LTXV 2.3 · 32 clips · ~14 min
LTXV 2.3 distilled & full · Train LTX on fal · auto-captions · character LoRAs · style LoRAs · motion LoRAs · brand LoRAs · start + end frame · reference video · lip sync · extend · LTX-2 audio-video coming soon · Wan 2.2 coming soon · Hunyuan coming soon · $5 testing pack

Pick what you're actually training for.

Each recipe pre-sets learning rate, steps, and captioning prompt. Change anything later.

Four steps, no cluster.

01
data

Drop clips

Upload 10–40 clips or a folder. Anything LTXV likes: 24 fps, 2–5 seconds, any aspect ratio. Video or image sequences both work.
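If you want to sanity-check a folder before uploading, the targets above (10–40 clips, 24 fps, 2–5 seconds) are easy to validate. A minimal sketch in Python, assuming you already have each clip's fps and duration (e.g. from ffprobe); the function names are illustrative, not part of Grix:

```python
# Pre-flight check for a training set. The 24 fps / 2-5 s / 10-40 clip
# targets come from the guidance above; everything else is illustrative.

def check_clip(fps: float, duration_s: float) -> list[str]:
    """Return a list of warnings for one clip (empty = looks good)."""
    warnings = []
    if abs(fps - 24) > 0.5:
        warnings.append(f"fps is {fps:g}, LTXV prefers 24")
    if not 2.0 <= duration_s <= 5.0:
        warnings.append(f"duration is {duration_s:g}s, aim for 2-5s")
    return warnings

def check_dataset(clips: list[tuple[float, float]]) -> list[str]:
    """Warn if the set is outside the suggested 10-40 clip range."""
    warnings = []
    if not 10 <= len(clips) <= 40:
        warnings.append(f"{len(clips)} clips; 10-40 recommended")
    for i, (fps, dur) in enumerate(clips):
        warnings += [f"clip {i}: {w}" for w in check_clip(fps, dur)]
    return warnings
```

Grix runs equivalent checks on upload, so this is purely optional homework.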

02
prep

Captions, auto

A captioner tuned for LoRA work writes trigger tokens and descriptions — the kind that actually help training, not just describe frames. Every caption is editable.

03
train

Train on fal

Your job runs on Train LTX. Pick Fast (~$1.20) or Quality (~$5.60). Grix monitors loss in real time and tells you when to stop.

04
ship

Test, ship, edit

One composer bar for every LTXV feature: start/end frame, reference, lip sync, extend. Export .safetensors anytime.

THE STUDIO

One composer bar. Every LTXV trick.

Type a prompt. Pick a LoRA. Drop a start frame, an end frame, a reference video. Hit go. Fast mode is distilled, quality is full — nothing else to learn.

Open the studio →
Fast / Quality · distilled or full
Start + end frame · anchor your shot
Reference video · style or motion
Lip sync · speech to face
Extend · grow a clip
Edit · in-place changes
fast · 25cr · warehouse pan
quality · 70cr · courtyard sun
lip sync · take 4
lora/sora-camera-r4 · "slow push-in across a rain-slick courtyard…"
Fast / Quality
~70cr

The trainer that follows the LTX 2.3 recipe.

Grix uses fal's Train LTX endpoint directly — the same pipeline Lightricks ships for LTXV 2.3. That means correct bucketing, the right captioning format, and a LoRA that runs on both the distilled and full checkpoints.

Ready to move to Wan 2.2, Hunyuan Video, or LTX-2 (Lightricks' audio-video model) later? Same studio, same captioning, same exports. The endpoint changes under the hood — nothing you need to manage.

LTXV 2.3 full
fal-ai/ltxv-13b-098
LTXV 2.3 distilled
fal-ai/ltxv-13b-098-distilled
Start + end frame
ltxv-13b/image-to-video
Extend
ltxv-13b/extend
Lip sync
ltxv-13b/lipsync
Training
fal-ai/ltxv-trainer

Shown for transparency — you never need to touch these. One composer handles them all.
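For anyone who does want to touch them: the endpoints above are callable with fal's Python client (`pip install fal-client`). A hedged sketch of attaching an exported LoRA to a generation request. The argument names ("prompt", "loras", "path", "scale") are assumptions about the LTXV schema, so check the endpoint docs on fal before relying on them:

```python
# Build a text-to-video request that attaches one trained LoRA.
# Field names are assumed, not confirmed against the live schema.

def build_ltxv_request(prompt: str, lora_url: str, scale: float = 1.0) -> dict:
    return {
        "prompt": prompt,
        "loras": [{"path": lora_url, "scale": scale}],
    }

payload = build_ltxv_request(
    "slow push-in across a rain-slick courtyard",
    "https://example.com/my-lora.safetensors",  # your exported LoRA
)

# Then submit it to a real endpoint from the table above:
# import fal_client
# result = fal_client.subscribe("fal-ai/ltxv-13b-098-distilled", arguments=payload)
```

Inside Grix, the composer bar builds and submits this payload for you.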

Cheap, complete, obvious.

How Grix LoRA compares with raw fal + CLI and general video gen apps. Everything below ships with Grix:

$5 testing pack
LLM training assistant
Built-in studio for testing
Fast + Quality modes
Start / end frame + ref video
Lip sync + extend + edit
Captions tuned for LoRA
Safety-checked outputs
Credits roll over

Three things we won't compromise on.

01

Honest pricing

Credits map 1:1 to what the GPU actually costs us, plus a margin. No fake trial that loses money, no "contact us" enterprise tier, no surprise overages.

02

Captioning is the job

Most bad LoRAs are bad captions. We wrote a captioner tuned specifically for LoRA training — every caption is editable before you launch a job.

03

No lock-in

Every LoRA exports as .safetensors. Use it on fal, in ComfyUI, or anywhere LTXV 2.3 runs. Your clips are never used to train public models.

Credits. No drama.

One currency for training, testing, generating. Fast training is ~120 credits (~$1.20). Quality is ~560 credits (~$5.60).
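The numbers above imply a flat rate of 100 credits per dollar (120 cr ≈ $1.20, 560 cr ≈ $5.60). If you want to estimate a job before launching it, the conversion is one line; this helper is illustrative, not a Grix API:

```python
# Implied by the pricing above: 100 credits per US dollar.
CREDITS_PER_USD = 100

def estimated_usd(credits: int) -> float:
    """Rough dollar cost of a job priced in credits."""
    return credits / CREDITS_PER_USD
```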

Starter
$5
One-time credit pack
1 fast training run + studio time
Get Starter
Studio
$99/mo
9,900 credits / month
Teams, agencies, weekly drops
Get Studio

Cancel anytime · Unused credits roll over · Your clips are never used to train public models

Answers.

What is Grix LoRA?

A hosted trainer for video LoRAs. You upload clips, Grix captions them and runs the Train LTX job on fal.ai, and you get a .safetensors LoRA that drops into LTX Video 2.3 — plus a studio to test it without ever leaving the app.

How is this different from running Train LTX myself?

Nothing fundamental — it's the same Train LTX endpoint under the hood. Grix handles the captioning, picks sensible steps and learning rates for your use case, monitors the loss and tells you when to stop, and gives you a studio to test the LoRA the second it finishes.

Which models do you support?

LTX Video 2.3 is live (distilled and full). LTX-2 (Lightricks' audio-video model), Wan 2.2, and Hunyuan Video are on the roadmap — same studio, same captioning, same exports.

How good are the captions?

We built a captioner specifically for LoRA training — short trigger tokens plus the descriptive detail that training actually needs. You can edit every caption before kicking off a job.

How much does training actually cost?

Fast mode runs about 120 credits (~$1.20) for a 10-clip character LoRA — roughly 12 minutes on fal's Train LTX. Quality mode runs a longer schedule at around 560 credits (~$5.60). You set a credit cap; Grix monitors loss and stops early if it has converged.

Can I download my LoRA?

Yes. Every completed job has a .safetensors download. Use it on fal, in ComfyUI, or anywhere LTXV 2.3 is supported.

Is there a free tier?

The $5 starter pack gives you roughly one Fast training job and some studio time. You keep the LoRA regardless. There is no time-limited free trial that locks you out.

Bring the clips. We'll train the LoRA.

$5 starter pack covers your first fast training job. Keep the LoRA regardless.