Why Spark should be your first move
Profiling is how you stop guessing. Paper documents Spark as the preferred profiler and bundles it starting with 1.21.
That changes the usual admin workflow:
- you no longer need to jump straight into random config changes
- you can capture evidence from the live server
- you can hand a real report to a plugin author or support channel if needed
Before you start
Profile only when the issue is happening. If the server feels fine right now, the report will mostly confirm exactly that.
That is why Paper explicitly warns that profiling is only effective while the problem is actively occurring.
Step 1: Confirm the lag signal
Run:
```
/spark tps
```
This gives you the immediate health view and tells you whether you are dealing with sustained low TPS, spikes, or mostly normal behavior.
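If `/spark tps` looks healthy but players report stutters, spark can watch individual ticks as they happen. The commands below are part of spark's standard command set; exact output and thresholds vary by version:
```
/spark health
/spark tickmonitor
```
`/spark health` adds memory and CPU context to the TPS view, and `/spark tickmonitor` logs ticks that run significantly over the recent average, which is the signature of spikes rather than sustained lag. Running `/spark tickmonitor` again stops the monitoring.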
If you do not currently have practical admin access, set up RCON and console access in Docker first.
Step 2: Start a timed profile
Paper's own example is:
```
/spark profiler start --timeout 600
```
That records for ten minutes and then returns a report URL.
Ten minutes is long enough to catch:
- repeating scheduled tasks
- heavy chunk exploration
- farms under real player load
- longer GC patterns
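If the problem is spiky rather than sustained, the profiler can be told to keep only slow ticks. The flag below is a documented spark profiler option; double-check it against your installed spark version:
```
/spark profiler start --timeout 600 --only-ticks-over 100
/spark profiler stop
```
`--only-ticks-over 100` records only ticks that took longer than 100 ms (a healthy tick finishes within 50 ms), and `/spark profiler stop` ends the session early and returns the report URL once you have seen enough.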
Step 3: Reproduce the actual problem
While the profiler is running, do not leave the server empty unless emptiness is the problem. Let players keep doing the thing that caused the lag:
- exploring new terrain
- using a farm
- entering a busy base
- running a minigame
You want the profile to capture the expensive behavior, not the quiet aftermath.
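You can also check on the recording without ending it. Depending on your spark version, these subcommands are available:
```
/spark profiler info
/spark profiler open
```
`info` confirms the profiler is still active; `open` uploads a snapshot of the data collected so far without stopping the recording, which makes for a quick mid-session sanity check.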
Step 4: Read the report in the right order
Do not start by staring at every number. Use this order:
- overall tick health
- top hot paths or heavy tasks
- plugin-specific or chunk-specific concentration
You are looking for concentration, not randomness. If one plugin, one subsystem, or one region dominates the report, that is your first real lead.
Step 5: Turn findings into one concrete change
Good first actions are specific:
- lower `simulation-distance` (see the sketch after this list)
- disable one plugin feature
- limit a farm design
- pre-generate chunks before an event
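As a sketch of the first item: `simulation-distance` lives in `server.properties` on a vanilla or Paper server and is typically applied on restart. The values below are illustrative, not recommendations:
```
# server.properties — illustrative values, tune for your hardware and player count
simulation-distance=8
view-distance=10
```
Lowering simulation distance shrinks the radius in which entities, crops, and redstone actually tick, which is usually a much bigger lever than view distance alone.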
Bad first actions are vague:
- “give it more RAM”
- “optimize everything”
- “paste someone else’s config pack”
Docker-specific note
Spark runs inside the Paper server process, so Docker does not change the profiling method much. What Docker does change is how you access the console and logs.
Useful commands around a profiling session:
```
docker compose logs -f mc
docker exec mc rcon-cli "spark profiler start --timeout 600"
```
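Two more that pair naturally with the above, assuming the same `mc` container and an image that ships `rcon-cli` (the popular itzg/minecraft-server image does):
```
docker exec mc rcon-cli "spark profiler stop"
docker exec mc rcon-cli "spark tps"
```
The first ends the recording and prints the report URL, the second gives you the quick health check without opening an interactive console session.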
Common mistakes
| Mistake | Why it hurts | Better move |
|---|---|---|
| Profiling an idle server | Report misses the real bottleneck | Profile under active load |
| Making many fixes at once | You never learn what mattered | Apply one high-confidence change first |
| Ignoring chunk and entity context | The hot path stays abstract | Relate the report to in-game behavior |
| Treating every red number as equally important | You drown in noise | Look for dominant consumers first |
FAQ
What if Spark shows mostly normal values, but players still complain?
Then the issue may be intermittent, network-related, or outside the server tick path. Repeat the profile during the actual bad window.
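spark can also help separate network lag from tick lag; `/spark ping` is a standard spark command that reports player latency:
```
/spark ping
```
If tick health is normal but pings are high or unstable, the bottleneck is likely the connection or the host network, not the server loop.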
Should I still use Timings?
Paper documents Timings as deprecated in favor of Spark and notes that it is harder for beginners to read.
Next steps
- If you still need the basics of TPS and MSPT, read Minecraft Server Lag Explained.
- If the report points to chunk simulation pressure, go to View Distance vs Simulation Distance.
- If your admin path is clumsy, improve it with RCON and console access in Docker.