Use Spark Profiler to Find Minecraft Lag in Paper and Docker

Paper has bundled Spark since 1.21, which makes it the preferred profiler for diagnosing lag. This guide shows how to capture a useful report and how to read the first findings without guesswork.



Why Spark should be your first move

Profiling is how you stop guessing. Paper documents Spark as the preferred profiler and has bundled it since 1.21.

That changes the usual admin workflow:

  • you no longer need to jump straight into random config changes
  • you can capture evidence from the live server
  • you can hand a real report to a plugin author or support channel if needed

Before you start

Profile only when the issue is happening. If the server feels fine right now, the report will mostly confirm that it feels fine.

That is why Paper explicitly warns that profiling is only effective while the problem is actively occurring.

Step 1: Confirm the lag signal

Run:

/spark tps

This gives you the immediate health view and tells you whether you are dealing with sustained low TPS, spikes, or mostly normal behavior.
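
If the complaint is about spikes rather than a steady slowdown, spark also ships a tick monitor that reports slow ticks to the console as they happen, which helps confirm the pattern before you commit to a full profile. Running the same command again toggles it off:

/spark tickmonitor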

If you do not currently have practical admin access, set up RCON and console access in Docker first.

Step 2: Start a timed profile

Paper's own example is:

/spark profiler start --timeout 600

That records for ten minutes and then returns a report URL.
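
You do not have to wait out the full window. If the lag shows up early, these standard spark subcommands end the capture; stop uploads the report immediately, cancel discards it:

/spark profiler stop
/spark profiler cancel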

Left to run, ten minutes is long enough to catch:

  • repeating scheduled tasks
  • heavy chunk exploration
  • farms under real player load
  • longer GC patterns (see the monitor command below)
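
For the GC side specifically, spark can echo each collection to the console while the profile runs, which makes long or frequent pauses easy to spot. Running the command again toggles it off:

/spark gcmonitor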

Step 3: Reproduce the actual problem

While the profiler is running, do not leave the server empty unless emptiness is the problem. Let players keep doing the thing that caused the lag:

  • exploring new terrain
  • using a farm
  • entering a busy base
  • running a minigame

You want the profile to capture the expensive behavior, not the quiet aftermath.

Step 4: Read the report in the right order

Do not start by staring at every number. Use this order:

  1. overall tick health
  2. top hot paths or heavy tasks
  3. plugin-specific or chunk-specific concentration

You are looking for concentration, not randomness. If one plugin, one subsystem, or one region dominates the report, that is your first real lead.
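
Before opening the report, it can also help to grab a one-shot live summary for comparison. spark's health report prints TPS, tick durations, CPU, and memory together, which tells you whether the window you profiled still reflects current behavior:

/spark health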

Step 5: Turn findings into one concrete change

Good first actions are specific:

  • lower simulation-distance (see the snippet below)
  • disable one plugin feature
  • limit a farm design
  • pre-generate chunks before an event

Bad first actions are vague:

  • “give it more RAM”
  • “optimize everything”
  • “paste someone else’s config pack”
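
For the first good action above, the change lives in server.properties. A minimal sketch; 6 and 8 are illustrative values, not recommendations, so tune them for your hardware and player count:

# server.properties (takes effect after a restart)
simulation-distance=6
view-distance=8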

Docker-specific note

Spark runs inside the Paper server process, so Docker does not change the profiling method much. What Docker does change is how you access the console and logs.

Useful commands around a profiling session:

docker compose logs -f mc
docker exec mc rcon-cli "spark profiler start --timeout 600"
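
rcon-cli prints the server's response, so stopping the profiler this way should return the report URL straight to your terminal; if the stop was triggered elsewhere, the server log is the fallback. A small sketch, assuming the itzg/minecraft-server image (which ships rcon-cli) and a Compose service named mc; the grep pattern matches spark's report viewer host:

docker exec mc rcon-cli "spark profiler stop"
docker compose logs --tail 100 mc | grep -i "spark.lucko.me"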

Common mistakes

Mistake | Why it hurts | Better move
Profiling an idle server | Report misses the real bottleneck | Profile under active load
Making many fixes at once | You never learn what mattered | Apply one high-confidence change first
Ignoring chunk and entity context | The hot path stays abstract | Relate the report to in-game behavior
Treating every red number as equally important | You drown in noise | Look for dominant consumers first

FAQ

What if Spark shows mostly normal values, but players still complain?

Then the issue may be intermittent, network-related, or outside the server tick path. Repeat the profile during the actual bad window.
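
For the network angle, spark can also report per-player latency directly, which quickly separates a slow server from a slow connection:

/spark ping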

Should I still use Timings?

Paper documents Timings as deprecated in favor of Spark and notes that Timings reports are harder for beginners to read.

