Check out the latest model drops and powerful integrations.
With support for remote inference LoRAs and direct downloads from Hugging Face and Civitai, Scope now offers a highly flexible LoRA workflow.
LoRA (Low-Rank Adaptation) is a lightweight adapter fine-tuned on a smaller, specialized dataset. Rather than replacing your base model, it modifies the base model in a targeted way: adding stylized effects, emphasizing specific aesthetics, or improving quality in a narrow domain.
Sometimes a LoRA alone outperforms the base model. More often, the strongest results come from combining the two. Think of it as controlled specialization layered on top of a general model.
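The "lightweight" part comes from the math: instead of storing a full update for each weight matrix, a LoRA stores two small low-rank factors. A minimal numpy sketch (sizes and the scaling convention here are illustrative, not Scope's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Base weight matrix of a hypothetical layer.
d_out, d_in, r = 64, 64, 4            # r is the LoRA rank, r << d_in
W = rng.standard_normal((d_out, d_in))

# The LoRA stores only two small factors instead of a full d_out x d_in delta.
B = rng.standard_normal((d_out, r))
A = rng.standard_normal((r, d_in))

alpha = 1.0                           # LoRA scaling factor
W_adapted = W + (alpha / r) * (B @ A)

# The adapter carries far fewer parameters than the layer it modifies.
full_params = d_out * d_in            # 4096
lora_params = d_out * r + r * d_in    # 512
print(lora_params / full_params)      # 0.125
```

Because the adapter is a separate, small delta, it can be shipped, swapped, and scaled independently of the base weights.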
Permanent Merge bakes the LoRA into the model at a fixed strength. It can't be adjusted while Scope is running. You get better FPS and stable results, but no live control or dynamic transitions. Best when you already know the exact strength you want.
Real-Time PEFT lets you adjust LoRA strength live, animate influence over time, and transition between visual states. FPS takes a hit, but the creative flexibility is significantly higher.
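The two modes above compute the same thing; they differ in *when* the strength is applied. A toy sketch of the difference (plain numpy, not Scope's actual code path):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 8, 2
W = rng.standard_normal((d, d))       # base layer weights
B = rng.standard_normal((d, r))       # LoRA factors
A = rng.standard_normal((r, d))

x = rng.standard_normal(d)            # an input activation

# Permanent merge: bake the delta in once at a fixed strength.
strength = 1.5
W_merged = W + strength * (B @ A)     # strength is now frozen into W
y_merged = W_merged @ x

# Real-time PEFT: keep the factors separate and scale per forward pass.
def forward(x, strength):
    return W @ x + strength * (B @ (A @ x))

y_live = forward(x, 1.5)              # matches the merged path...
assert np.allclose(y_merged, y_live)
# ...but strength can change on the next frame without touching W.
y_next = forward(x, 0.5)
```

The merged path does one matrix multiply per forward pass, which is why it's faster; the live path pays for an extra low-rank multiply every frame in exchange for an adjustable strength.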
Each LoRA has a sweet spot. Too low and the effect is barely visible. Too high and it overpowers everything.
For example, a slime-style LoRA might look best around 1.5. Scope allows values up to 5, but at that level the effect typically dominates the entire image.
Scope also supports negative strength values, which reverse or suppress a LoRA's learned features. With real-time PEFT, you can animate smoothly from negative through zero to positive — enabling dynamic style modulation and smooth visual transitions.
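A negative-to-positive sweep is just a strength schedule evaluated per frame. A small sketch of one such schedule (the range, frame count, and easing curve here are hypothetical choices, not Scope defaults):

```python
import numpy as np

# Sweep LoRA strength from -1.0 through zero to +1.5 over 120 frames.
frames = 120
start, end = -1.0, 1.5
t = np.linspace(0.0, 1.0, frames)

# Smoothstep easing makes the transition feel less mechanical than linear.
eased = t * t * (3.0 - 2.0 * t)
strengths = start + (end - start) * eased

print(strengths[0], strengths[-1])    # -1.0 1.5
```

Feeding one value of `strengths` per frame into a live-adjustable LoRA produces a continuous glide from suppressed style, through neutral, into full effect.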
Scope allows multiple LoRAs to be loaded simultaneously, opening up blending, crossfading, style transitions, and hybrid aesthetics. Interactions vary by model, so experimentation is required, but it can lead to highly original results.
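Conceptually, stacking LoRAs means summing their deltas at independent strengths, so a crossfade is just two strengths moving in opposite directions. A minimal sketch (toy matrices, not Scope's loader):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 8, 2
W = rng.standard_normal((d, d))       # base layer weights

# Two hypothetical LoRAs trained for different styles.
B1, A1 = rng.standard_normal((d, r)), rng.standard_normal((r, d))
B2, A2 = rng.standard_normal((d, r)), rng.standard_normal((r, d))

def blended(W, s1, s2):
    """Apply both adapters at independent strengths."""
    return W + s1 * (B1 @ A1) + s2 * (B2 @ A2)

# Crossfade: as one style fades out, the other fades in.
for t in np.linspace(0.0, 1.0, 5):
    W_t = blended(W, s1=1.0 - t, s2=t)
```

The deltas add linearly, but the model's output does not have to respond linearly, which is why blended LoRAs can interact in surprising ways.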
Download a LoRA from Hugging Face or Civitai, make sure it matches your base model's architecture, and load it via remote inference or local integration. If the base model, LoRA type, and inference method don't match, the LoRA won't function correctly.
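The compatibility requirement boils down to checking a LoRA's metadata against the base model before loading. A hypothetical sketch of such a check (the field names and values are illustrative, not Scope's actual schema):

```python
# Sanity-check a LoRA against the base model before loading it.
# All keys here ("architecture", "base_architecture", etc.) are assumptions.
def is_compatible(base_model: dict, lora: dict) -> bool:
    return (
        base_model["architecture"] == lora["base_architecture"]
        and lora["type"] in base_model["supported_lora_types"]
    )

base = {"architecture": "sdxl", "supported_lora_types": {"peft", "merged"}}
lora = {"base_architecture": "sdxl", "type": "peft"}
print(is_compatible(base, lora))  # True
```

Failing fast on a check like this is cheaper than loading an incompatible adapter and debugging silently wrong output.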