I found the source of the stutter.
It's the snapTime / targetTime relationship: snapTime changes at fixed intervals, displayTime does not, and the result is a targetTime that constantly bounces around.
Only set targetTime if needed.
Here's what I did with it. "Hotfix" style:
Instead of a hotfix I would suggest not time-interpolating the way this test does at all. Do it like you did further down with the position.
If there is a network change, decide what the new targetTime should be, then interpolate to it the classical way, without moving the goalposts. When you reach the new time target you will be exactly there. Repeat if you want, but only after you have reached the end. The current code can't interpolate exactly to a certain time, even if it got precise input, which it does not.
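The "interpolate to a fixed target, retarget only after reaching it" idea might look something like this. This is a sketch under my own assumptions, not the project's code: the offset-blend representation, the `blend_s` window, and all the names are mine.

```python
class SmoothedClock:
    """displayTime = raw clock + offset. The offset blends linearly
    from its old value to a new one over blend_s seconds, and a new
    target offset is only accepted once the previous blend has
    finished -- the goalposts never move mid-blend, so the clock
    lands exactly on the chosen target."""

    def __init__(self, blend_s=0.1):
        self.blend_s = blend_s
        self.old_offset = 0.0
        self.new_offset = 0.0
        self.blend_t = blend_s  # start in the "finished" state

    def retarget(self, offset):
        # Only allowed after the previous blend has run to the end.
        if self.blend_t >= self.blend_s:
            self.old_offset = self.current_offset()
            self.new_offset = offset
            self.blend_t = 0.0
            return True
        return False  # still blending: the request is ignored

    def current_offset(self):
        a = min(self.blend_t / self.blend_s, 1.0)
        return self.old_offset + (self.new_offset - self.old_offset) * a

    def display_time(self, raw_time):
        return raw_time + self.current_offset()

    def tick(self, dt):
        self.blend_t += dt
```

Once the blend completes, `current_offset()` equals the requested offset exactly; there is no residual error to chase.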
After checking several possible sources, mostly around clock-keeping and time smoothing, I found the source of the stutter.
I did this by just bypassing all the smoothing and setting (displayTime = targetTime).
The higher the speed or latency, the bigger the stutter in targetTime, so this matches the observations.
Here you see it in action: (targetTime - displayTime) with (smoothTimerState) off.
With the creation of the first snapshot targetTime is the same as displayTime.
With the creation of every subsequent snapshot, targetTime is set to (snapTime - Settings.timeOffset).
(snapTime) stems from the update function on the server. It is calculated by multiplying the snap id by the snap interval, giving a time at which this snap might be valid.
Problem is that this is not when the time interpolation runs! The time interpolation runs constantly, which means it constantly tries to interpolate toward times that only change at the snapshot update rate.
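To make the mismatch concrete, here is a sketch. The 20 Hz snap rate, 60 fps frame rate, and 0.1 s offset are all my assumptions, picked just to show the shape of the problem:

```python
SNAP_RATE = 20             # snapshots per second (assumed)
SNAP_DT = 1.0 / SNAP_RATE
TIME_OFFSET = 0.1          # stand-in for Settings.timeOffset (assumed)

def snap_time(snap_id):
    # snapTime = snap id * snap interval: a quantized timestamp
    return snap_id * SNAP_DT

def target_time(now):
    # targetTime derived from the latest snapshot available at `now`
    latest_id = int(now / SNAP_DT)
    return snap_time(latest_id) - TIME_OFFSET

# Sampled at 60 fps, targetTime is a staircase: flat for several
# frames, then a jump of a whole SNAP_DT, while the frame clock
# (and displayTime) advances every single frame.
staircase = [target_time(i / 60) for i in range(7)]
```

The per-frame error (targetTime - displayTime) therefore ramps and snaps back in a sawtooth, which is exactly the bounce described above.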
Here you can see how the stutter coincides with the client consuming a snapshot. This matches the rate of the stutter. In this test (displayTime = targetTime).
It appears Flavien wants to regulate displayTime continuously. In that case targetTime needs to be regulated selectively.
I added this code to the targetTime update. It eliminates the stutter. It reacts to spikes after 0.1 seconds, by detecting whether extrapolation is active and whether the difference between targetTime and displayTime exceeds 0.06 seconds.
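The gate amounts to something like this. The 0.06 s threshold and the extrapolation check mirror the description above; the function shape and names are my own sketch, not the actual patch:

```python
SPIKE_THRESHOLD = 0.06  # seconds, from the description above

def maybe_update_target(target_time, display_time, new_snap_target, extrapolating):
    """Leave targetTime alone unless the clock has genuinely drifted.

    targetTime is only replaced when the client has run off the end
    of its snapshot buffer (extrapolating) or when the gap to
    displayTime exceeds the spike threshold; otherwise the jittery
    per-snapshot value is ignored and the existing target is kept.
    """
    if extrapolating or abs(target_time - display_time) > SPIKE_THRESHOLD:
        return new_snap_target
    return target_time
```

In the steady state nothing changes, so the sawtooth disappears; only real spikes get through.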
I also added a feedback loop that uses an average of multiple snaps. It is more precise than using snapTime, but it adds some overshoot and bounce.
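Such an averaging loop could be sketched like this. The window size of 4 and all the names are my assumptions; the point is only the mechanism:

```python
from collections import deque

class SnapAverager:
    """Derive targetTime from an average over the last few snaps.

    The average is steadier than any single raw snapTime, but it
    lags behind real changes, which is where the overshoot and
    bounce mentioned above come from.
    """

    def __init__(self, n=4):
        self.offsets = deque(maxlen=n)  # clock offsets, newest last

    def add_snap(self, snap_time, local_time):
        # Record the clock offset each arriving snapshot implies.
        self.offsets.append(snap_time - local_time)

    def target(self, local_time, time_offset):
        avg = sum(self.offsets) / len(self.offsets)
        return local_time + avg - time_offset
```

With a steady stream of snaps the averaged offset converges on the true one; after a latency change it takes a full window to settle, hence the bounce.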
It's all cobbled together really.
A side note:
This elapsed time is only accurate to milliseconds.
So I tried to make it more accurate
Elapsed.TotalMilliseconds is a double, as accurate as the underlying Stopwatch ticks allow.
After some tests though I couldn't really see any noticeable difference.
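The difference between the two, expressed as a Python sketch (the function names are mine; they mirror how .NET's Stopwatch.ElapsedMilliseconds truncates to a whole number of milliseconds while Elapsed.TotalMilliseconds keeps the fractional part):

```python
def ms_truncated(elapsed_s):
    # behaves like Stopwatch.ElapsedMilliseconds: whole milliseconds only
    return int(elapsed_s * 1000)

def ms_precise(elapsed_s):
    # behaves like Elapsed.TotalMilliseconds: a double carrying the
    # sub-millisecond detail of the underlying tick count
    return elapsed_s * 1000.0
```

The sub-millisecond part is what the truncated version throws away; as noted above, in practice it made no visible difference here.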
Also: the player's own ship should be subject to these time adjustments too. Currently, if the client lags out, everyone else is set into the past except for him. If you want to keep approaching the problem this way, I would suggest having the player ship be influenced by the shifted displayTime as well.
Also also: I would reconsider this "shifting time" system. It can be quite complicated to debug when there's one important clock that gets modified often ... maybe abstract it a little, so at least you can compare how far apart past packets arrived and such.
Making players play in the past is fine and dandy, but the way it's set up right now is kind of nightmarish.
Another option would be to try to make snapTime more predictable, though I don't currently have an idea how to do that. I hope my time on this has helped some.