Discussion:
NIST time services
Tom Van Baak
2014-03-19 05:18:23 UTC
If you can design a system that can handle 6.5 billion requests per day, this opportunity is for you...


https://www.fbo.gov/spg/DOC/NIST/AcAsD/RFI_InternetTimeServiceComments/listing.html

Solicitation Number: RFI_InternetTimeServiceComments

Synopsis:
Added: Mar 18, 2014 9:46 am

SUMMARY: National Institute of Standards and Technology (NIST), Department of Commerce, seeks information from the public on NIST's potential transition of time services from a NIST-only service to private sector operation of an ensemble of time servers that will provide NIST-traceable time information in a number of different formats over the public Internet.
Didier Juges
2014-03-19 12:07:13 UTC
I would, but I don't have the time at the moment :)

Didier KO4BB
--
Sent from my Motorola Droid Razr 4G LTE wireless tracker while I do other things.
Jim Lux
2014-03-19 13:00:33 UTC
Post by Tom Van Baak
If you can design a system that can handle 6.5 billion requests per day, this opportunity is for you...
https://www.fbo.gov/spg/DOC/NIST/AcAsD/RFI_InternetTimeServiceComments/listing.html
Solicitation Number: RFI_InternetTimeServiceComments
For those who are unfamiliar with the ways of US Government
contracting, this is NOT a request for proposals to provide the service.
It's more of a preliminary step to identify potential bidders, gather
background information, and shake the trees to find out who the potential
players are, as well as to help make a decision on whether it's even a
good idea (e.g. it might factor into a make-vs-buy report).

We do this at NASA when we want to make sure we haven't missed something
in the marketplace. These days, Government folks go to fewer
conferences and almost no trade shows. So you find out what's available
by exercising google or bing, which is not such a great way to find
niche products and services. Someone who's got a great way to do
reliable, accurate time distribution over the internet might not have a
big web presence, or might be overshadowed by something similar that has
been heavily Search Engine Optimized.

This is also a way for people who aren't necessarily interested in
providing the service to give comments to NIST about potential issues
that may be of concern. Those kinds of comments might wind up changing
the eventual procurement, or might even result in a report that says
"nope, not worth privatizing this, because of reasons A, B, and C".
That's the crux of question 7 at the end: "What are advantages and
disadvantages of NIST's potential transition of time services from a
NIST-only service to private sector operation..."


A lot of the questions at the end of the RFI are things that get
discussed on time-nuts from time to time.
Chris Albertson
2014-03-19 16:50:36 UTC
So they want to re-invent NTP?

I think NTP already serves way more than 6.5 billion requests per day.
The problem with NTP is that while it is nearly optimal and provides the
best time accuracy for a given hardware/network setup, it is not
technically "traceable" even if the time really is from NIST indirectly.

I think you could fix this traceability problem with some rules about
how to write the configuration files; no new software. For example,
NTP already handles cryptographic authentication. Make the use of
this mandatory so that you know you are talking to a NIST-referenced
server.
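
For illustration, a minimal ntp.conf sketch of that idea (the key ID and
key material below are placeholders, not real NIST keys):

    # /etc/ntp.conf -- accept only authenticated replies from this server
    keys /etc/ntp.keys
    trustedkey 1
    server time.nist.gov key 1

    # /etc/ntp.keys -- format: id type key (placeholder secret)
    1 M ExampleSharedSecret

With that in place, ntpd rejects replies from that server that fail the
MAC check.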
--
Chris Albertson
Redondo Beach, California
Jim Lux
2014-03-19 17:21:17 UTC
Post by Chris Albertson
So they want to re-invent NTP?
I think NTP already serves way more than 6.5 billion requests per day.
The problem with NTP is that while it is nearly optimal and provides the
best time accuracy for a given hardware/network setup, it is not
technically "traceable" even if the time really is from NIST indirectly.
They are well aware of NTP... they serve 6.5 billion requests a day *from
NIST*, and that's what they are looking at potentially outsourcing or
changing.

And, of course, there's the "legally traceable" aspect.
Post by Chris Albertson
I think you could fix this traceability problem with some rules about
how to write the configuration files; no new software. For example,
NTP already handles cryptographic authentication. Make the use of
this mandatory so that you know you are talking to a NIST-referenced
server.
That's very possible, and you could respond to the RFI and tell them so.
Could you set up a legally traceable set of multiple tiers?
What would the mechanics of this be?

That's really what the RFI is all about.. "tell us what you think we
need to know"..

They'll get responses that are overlapping existing knowledge, for sure.

And nobody is going to respond with something that is confidential or
proprietary or telegraphs a future product line, because all the RFI
responses are essentially public info.
Chuck Forsberg WA7KGX
2014-03-22 19:24:16 UTC
I can see a use for an inexpensive GPSDO with a built-in
gigabit ethernet or USB3 port powering an NTP server.
--
Chuck Forsberg WA7KGX caf-***@public.gmane.org www.omen.com
Developer of Industrial ZMODEM(Tm) for Embedded Applications
Omen Technology Inc "The High Reliability Software"
10255 NW Old Cornelius Pass Portland OR 97231 503-614-0430
Chris Albertson
2014-03-22 19:54:21 UTC
Post by Chuck Forsberg WA7KGX
I can see a use for an inexpensive GPSDO with a built-in
gigabit ethernet or USB3 port powering an NTP server.
Neither of those is a good way to transfer time to an NTP server.
Both Ethernet and USB are packetized. The best way is a simple
wire carrying a square wave that pulses once per second.
Nothing could be simpler or more accurate.

The trick is to build an NTP server that can react deterministically
to the pulse. I think an ARM-based system could far outperform an
Intel-based one. ARM has two independent PRUs. These are little
32-bit processors, each with 4K of memory, built right onto the
same chip as the main ARM CPU. The PRUs are purpose-built for real-time
tasks and can handle nanosecond-level timing. In most existing systems
the PRUs are ignored and everything is done on the ARM.
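
For reference, the usual way a server sees such a pulse under Linux is the
kernel PPS API (RFC 2783). A minimal sketch (assumes a pulse wired to
/dev/pps0 and the pps-tools headers installed; error handling trimmed):

    /* read one PPS timestamp via the RFC 2783 API */
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/timepps.h>

    int main(void)
    {
        int fd = open("/dev/pps0", O_RDONLY);
        pps_handle_t handle;
        pps_params_t params;
        pps_info_t info;
        struct timespec timeout = { 3, 0 };  /* wait up to 3 s for an edge */

        time_pps_create(fd, &handle);
        time_pps_getparams(handle, &params);
        params.mode |= PPS_CAPTUREASSERT;    /* timestamp the rising edge */
        time_pps_setparams(handle, &params);
        time_pps_fetch(handle, PPS_TSFMT_TSPEC, &info, &timeout);
        printf("pulse #%lu at %ld.%09ld\n",
               (unsigned long)info.assert_sequence,
               (long)info.assert_timestamp.tv_sec,
               (long)info.assert_timestamp.tv_nsec);
        time_pps_destroy(handle);
        return 0;
    }

The jitter in when that code actually runs is exactly the nondeterminism a
PRU would avoid.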

Another way to improve things is to not even bother with
a link from the GPSDO to the NTP server. Why not simply run the
NTP server software on the same processor as the GPSDO? Just one of
the little PRUs is more than powerful enough to run a GPSDO. Each is
a 32-bit uP that runs at 200 MHz, one instruction per clock. The PRUs
don't run any operating system code but have access to all of the
ARM's memory and interrupts. A PRU is way overkill for a GPSDO.
Doing this eliminates the link cable from the GPSDO to the NTP server.
If the ARM CPU can't handle 6.5 billion requests per day, then buy many
copies of the ARM-based systems. They are cheap.
--
Chris Albertson
Redondo Beach, California
Mike George
2014-03-22 20:23:31 UTC
The PRUs (Programmable Realtime Units) aren't a feature of ARM in general
(they are not present on the Raspberry Pi, for instance). The BeagleBone
has 2 PRUs as you describe. It uses the TI Sitara ARM variant.
ARM just describes the core architecture. Manufacturers tack on all
sorts of proprietary peripherals depending on what they envision as its
primary target market.

Mike George
Chris Albertson
2014-03-22 20:53:39 UTC
Thanks. Yes, of course "ARM" refers only to the core architecture.

Would you know which other systems include the PRUs? Is it only in
the TI products? It seems like an ideal solution to the problem of
non-deterministic latency.

This may not even be required. There is no point to extreme levels of
accuracy because the weak link with any NTP server is the Internet.
NTP's purpose is to transfer time over unreliable data links and these
links will always be the limiting factor.
--
Chris Albertson
Redondo Beach, California
Brian Lloyd
2014-03-22 21:25:27 UTC
On Sat, Mar 22, 2014 at 3:53 PM, Chris Albertson
Post by Chris Albertson
This may not even be required. There is no point to extreme levels of
accuracy because the weak link with any NTP server is the Internet.
NTP's purpose is to transfer time over unreliable data links and these
links will always be the limiting factor.
NTP running in broadcast mode over a local Gig-E network shouldn't be too
bad. I suspect timing jitter is pretty low.
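
For what it's worth, a minimal ntpd sketch of that setup (the broadcast
address is a placeholder; broadcast associations should be authenticated):

    # on the server
    broadcast 192.168.1.255 key 1

    # on each client
    broadcastclient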
--
Brian Lloyd, WB6RQN/J79BPL
706 Flightline Drive
Spring Branch, TX 78070
brian-***@public.gmane.org
+1.916.877.5067
Chris Albertson
2014-03-22 22:55:30 UTC
Post by Brian Lloyd
NTP running in broadcast mode over a local Gig-E network shouldn't be too
bad. I suspect timing jitter is pretty low.
Gigabit Ethernet can actually be worse than 100BaseT because of the
way the hardware works. The packets arrive so fast that interrupts
occur once per several packets, not once per packet. So on gigabit
systems the NTP packet might not get timestamped correctly, because the
interrupt applies to a group of packets that all came in close in time
to each other. You are best off using 100BaseT.
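
On Linux you can often turn that interrupt coalescing off, at some cost in
CPU load; a sketch (interface name is a placeholder, and not every driver
supports these knobs):

    ethtool -c eth0                          # show coalescing settings
    ethtool -C eth0 rx-usecs 0 rx-frames 1   # interrupt per received frame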

But as I wrote before, if you are on a local network and are willing to
buy special PTP-compatible hardware, you can use PTP and avoid NTP.
PTP relies on timestamps put on by the network hardware and
is about one order of magnitude better than NTP if you have the right
network hardware.
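
On Linux a quick check of whether a NIC has that feature (interface name
is a placeholder):

    ethtool -T eth0    # lists hardware timestamping capabilities, PTP clock

If it shows hardware receive/transmit timestamping, linuxptp's ptp4l can
use it, e.g. "ptp4l -i eth0 -m".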

And if you REALLY care about timing, you will distribute a PPS or a
10 MHz reference.

NTP is best used over the Internet. It was designed for unreliable data links.
--
Chris Albertson
Redondo Beach, California
Paul
2014-03-23 01:34:02 UTC
On Sat, Mar 22, 2014 at 6:55 PM, Chris Albertson
Post by Chris Albertson
But as I wrote before, if you are on a local network and are willing to
buy special PTP-compatible hardware, you can use PTP and avoid NTP.
PTP relies on timestamps put on by the network hardware and
is about one order of magnitude better than NTP if you have the right
network hardware.
I believe this may be conventional wisdom, but time-nuts shouldn't believe
conventional wisdom; they should be measuring.
E.g. FSM says their NTP+PTP "servers" perform equally well using either
protocol. The trick is to use optimized NTP software and timestamping
hardware.
Post by Chris Albertson
And if you REALLY care about timing you will distribute a PPS or a
10MHz reference
Or, to rephrase "equally poorly using either protocol".
Chris Albertson
2014-03-23 17:37:13 UTC
Post by Paul
E.g. FSM says their NTP+PTP "servers" perform equally well using either
protocol. The trick is to use optimized NTP software and timestamping
hardware.
Yes. If you modify NTP so that it does the same thing as PTP, then it
will be as good as PTP. That should be obvious.

NTP is modular and it is easy to write a new reference clock driver.
So if I had timestamping network hardware, I'd want an NTP driver for
it and would use it. But most network hardware lacks the feature.
--
Chris Albertson
Redondo Beach, California
Paul
2014-03-23 19:39:35 UTC
On Sun, Mar 23, 2014 at 1:37 PM, Chris Albertson
Post by Chris Albertson
Yes. If you modify NTP so that it does the same thing as PTP, then it
will be as good as PTP. That should be obvious.
I believe you misunderstand my point.
Poul-Henning Kamp
2014-03-22 22:46:37 UTC
The main problem for NIST's or USNO's servers is not the actual time
transfer into the machine -- that is a solved problem -- but rather
getting enough packets spit out precisely enough, with the required
signature to make it traceable.
--
Poul-Henning Kamp | UNIX since Zilog Zeus 3.20
phk-***@public.gmane.org | TCP/IP since RFC 956
FreeBSD committer | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.
Mike George
2014-03-22 23:25:43 UTC
The PRU appears to be unique to TI.

I have only used the Raspberry Pi, BeagleBone, and Cubieboard.
The Cubieboard (Allwinner CPU) has a lot of I/O pins like the
BeagleBone, but nothing like a PRU.

Mike George
Brian Lloyd
2014-03-22 20:16:17 UTC
Post by Chuck Forsberg WA7KGX
I can see a use for an inexpensive GPSDO with a built-in
gigabit ethernet or USB3 port powering an NTP server.
Why not a BeagleBone Black with a GPS module that has 1PPS out connected to
an I/O pin? For that matter, add your OCXO and let the BBB discipline that
at the same time.

I bet you can come up with an NTP server and a GPSDO for not more than $200.
--
Brian Lloyd, WB6RQN/J79BPL
706 Flightline Drive
Spring Branch, TX 78070
brian-***@public.gmane.org
+1.916.877.5067
Paul
2014-03-22 20:38:45 UTC
Post by Brian Lloyd
I bet you can come up with an NTP server and a GPSDO for not more than $200.
I believe the Laureline largely meets the spec. It's all "open" -- hardware
and software.
It's not GigE, but it was suggested you could swap in a 1588 PHY.
The downside is that it's fake NTP, so some of the interesting bits won't
work, but those shouldn't concern the time-nut.
Jason Rabel
2014-03-23 16:26:34 UTC
Post by Chris Albertson
NTP is best used over the Internet. It was designed for unreliable data links.
In the quest for expansion of NTP over the internet, one thing has always nagged me.

You can find lists of servers and they will give a physical location along with other info about them...

Big whoop... Often these servers tend to be tied to one backbone, so even if they are physically located in the same city as me, the
packets still might have to travel thousands of miles just to switch networks. So what should be a 2ms delay has now become 20-40ms
(or more)... Even if they have multiple backbones, packets coming in are not guaranteed to leave on the same network. The more a
packet has to travel, the more uncertainty you build up... Yes NTP should still get you a reasonable time, but our quest is always
for something better.

If there were some sort of feature in NTP (maybe there already is???), or
even a separate program that could "test" a list of NTP servers to try and
pick the lowest latency, I think that could have a positive effect on time
transfer.
Bob Camp
2014-03-23 17:08:53 UTC
Hi

You can (and many do) run through a list of servers with an NTP client and see what you get. It’s a bit of work, but you only do it once.
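
A quick-and-dirty sketch of that survey, using ntpdate in query-only mode
(server names are placeholders):

    for s in ntp1.example.net ntp2.example.net ntp3.example.net; do
        ntpdate -q $s | grep delay   # per-server lines show offset and delay
    done

Sort the output by delay and feed the winners to your ntp.conf.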

———

I suspect that what NIST is looking for is somebody in the cloud business (Amazon, Google, Microsoft, IBM) to step up and mention that they have 2,989,875 server racks scattered about the world and they would be happy to run NTP on them for “free”. (see fine print attached ….)

Bob
Paul
2014-03-23 17:48:05 UTC
Post by Bob Camp
I suspect that what NIST is looking for is somebody in the cloud business
(Amazon, Google, Microsoft, IBM) to step up and mention that they have
2,989,875 server racks scattered about the world and they would be happy to
run NTP on them for "free". (see fine print attached ....)
There's no mention of compensation in the solicitation for input; however,
they do want some things that might or might not fit the business models of
the large server companies:
-) Traceable time.
0) 180 day hold-over in the absence of GPS (presumably with less than a
microsecond error).
1) Dedicated (low-latency) links to the UTC(NIST) ensemble
2) Notable oversight by NIST.
3) Geo dispersion.

Point three may seem a no-brainer but it disqualifies Amazon if they're
using only native infrastructure. It sounds like they want what they
should have gotten from Certichron/USTiming but didn't.

I suspect the best candidates would be someone like Hurricane or Equinix
with the Level3s in the second tier.
Jim Lux
2014-03-24 01:35:08 UTC
Post by Paul
There's no mention of compensation in the solicitation for input
An RFI isn't a solicitation (an offer to buy). It's more the
equivalent of mailing away for everyone's sales literature. If costs
are mentioned in someone's response, that might help NIST figure out
their cost/benefit and make vs buy analyses. For most RFIs, the
responses are public.



Post by Paul
however they do want some things that might or might not fit the business
models of the large server companies:
-) Traceable time.
0) 180 day hold-over in the absence of GPS (presumably with less than a
microsecond error).
1) Dedicated (low-latency) links to the UTC(NIST) ensemble
2) Notable oversight by NIST.
3) Geo dispersion.
Point three may seem a no-brainer but it disqualifies Amazon if they're
using only native infrastructure. It sounds like they want what they
should have gotten from Certichron/USTiming but didn't.
I suspect the best candidates would be someone like Hurricane or Equinix
with the Level3s in the second tier.
Or, they might be looking for someone to be a system integrator, and put
it all together. That's what an RFI is all about.. get the ideas from
people who have them, so that when the solicitation does come out,
they're looking to buy something that someone is willing to sell.

It might also help them figure out what kind of budget they will need.


Note well that you don't have to be a provider of services to respond to
the RFI. If you have good ideas, but aren't able to implement them, for
whatever reason (maybe you personally don't want to be running a
business), you can still send them to NIST, and they'll factor into
their decision making and planning process.


When I've been involved in issuing RFIs in the past, often the best
ideas come from people/firms who aren't in the business. The folks in
the business are often loath to publicly put their ideas out there,
because they fear it will telegraph information to their competitors
about future business plans. If you're not planning on competing, what
do you care who knows about your ideas?
Chris Albertson
2014-03-23 17:24:58 UTC
On Sun, Mar 23, 2014 at 9:26 AM, Jason Rabel
Post by Jason Rabel
If there was some sort of feature in NTP (maybe there already is???), or even a separate program that could "test" a list of NTP
servers to try and pick the lowest latency, I think that could have a positive benefit on better time transfer.
Yes. This is exactly how NTP works. It constantly tests the servers
and selects the "best" subset of available reference clocks. This
changes over time, and it changes in real time.

There is a rather complex algorithm. First the set of clocks is
thinned down to what the code calls "truechimers" -- those are the
clocks that generally agree with the rest of the clocks. Then, from
the clocks that are not "voted off the island", so to speak, the time
is computed using a kind of weighting.
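
The agreement test is essentially an interval-intersection problem. A
minimal sketch of the idea in C (illustrative only, not ntpd's actual
code; the offsets and error bounds are made up):

    /* Each clock claims true time lies in [offset-err, offset+err];
     * find the point covered by the most intervals (Marzullo-style). */
    #include <stdio.h>
    #include <stdlib.h>

    struct edge { double x; int delta; };  /* +1 opens interval, -1 closes */

    static int cmp(const void *a, const void *b)
    {
        const struct edge *ea = a, *eb = b;
        if (ea->x != eb->x) return (ea->x > eb->x) ? 1 : -1;
        return eb->delta - ea->delta;      /* opens before closes on ties */
    }

    int main(void)
    {
        double off[] = { 0.010, 0.012, 0.011, 0.250 };  /* last one lies */
        double err[] = { 0.005, 0.004, 0.006, 0.005 };
        struct edge e[8];
        int i, n = 4, count = 0, best = 0;
        double at = 0;

        for (i = 0; i < n; i++) {
            e[2*i]   = (struct edge){ off[i] - err[i], +1 };
            e[2*i+1] = (struct edge){ off[i] + err[i], -1 };
        }
        qsort(e, 2*n, sizeof e[0], cmp);
        for (i = 0; i < 2*n; i++) {        /* sweep the sorted endpoints */
            count += e[i].delta;
            if (count > best) { best = count; at = e[i].x; }
        }
        printf("%d of %d clocks agree near %+.3f s\n", best, n, at);
        return 0;
    }

Here the fourth clock's interval never overlaps the others, so it is the
falseticker and gets voted off.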

The assumption NTP makes is that you can judge the quality of a
server by the variance (or "jitter") in the time it reports.

So yes, the problem and the solution you thought of were built
into NTP about 30 years ago. In fact that is the whole point of
its being: to estimate the variance in round-trip times and use
this to determine how much to weight the results. Now, the key
assumption NTP makes might be wrong, and this is a large source of
error. It assumes the one-way jitter is 1/2 the round-trip jitter.
If this is wrong, it will give incorrect weight to a server.

NTP will eventually settle on the best few servers it finds, but it
continues to talk to all of them, because which ones are the "best"
will change over time.

A good example of this is a stratum 1 server that has a GPS connected.
Almost certainly it will also connect to other NTP servers and get
time from them as well as the GPS. Very quickly it will determine
that the GPS has the "best" performance and will use that. But if you
disconnect the GPS antenna it will very quickly find the next best
source(s) of time.

Another example is an "island network". Say you have five NTP servers
that are interconnected so that they each get time from the other
four. Normally they also use some Internet servers, but when the
Internet goes down, NTP will find which of the local island servers
have the most stable clocks, and those will carry the most weight in
the calculation of "consensus time", which is a weighted time based on
all "truechimers".
--
Chris Albertson
Redondo Beach, California
Magnus Danielson
2014-03-24 00:07:26 UTC
Jason,
This hits straight into one of the problems with NTP. It tries to use
the clock with the lowest stratum number rather than the best-quality
clock. A known trick is to use a set of stratum 2 servers locally and
only let local users connect to those, and then have them peer with
each other and with the same stratum 1 clocks. This gives much better
performance than letting the clients use the stratum 1 servers directly.
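
A sketch of ntp.conf for one box in that local stratum 2 tier (host names
are placeholders):

    # on s2-a.example.net
    server s1-a.example.net iburst   # the shared stratum 1 set
    server s1-b.example.net iburst
    peer   s2-b.example.net          # the other local stratum 2 boxes
    peer   s2-c.example.net

Clients then point only at the s2-* machines.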

The hop count is good for avoiding routing loops, but it is not a good
indicator of achieved quality.

If we had a decent intermediary tier, it would provide much better
performance. Since fewer clients would query the top servers, the second
level could query them much more often, and better filter the result for
the benefit of performance.

But that would break the basic assumptions of NTP, so you can't really
do that -- not that the protocol itself would object.

Your general idea is however sound, and surely you can do stuff with
scripts.

Cheers,
Magnus
Chris Albertson
2014-03-24 04:51:12 UTC
It's not really stratum based. The clock selection algorithm is described
here:
http://www.eecis.udel.edu/~mills/ntp/html/select.html
Basically it "allows every clock that can logically contribute" -- that
means those with estimated error bounds that overlap.

Then, with those not eliminated, NTP applies a clustering algorithm to find
the set of clocks that will contribute to the weighted average time:
http://www.eecis.udel.edu/~mills/ntp/html/cluster.html

The weight is not based on stratum, but it may often look as if it is,
simply because the servers using GPS or atomic clocks are very stable and
get weighted up. The weight is 1/(root distance), where root distance is
computed from the delay and dispersion.
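
As a toy illustration of why a local GPS wins that weighting (the numbers
and the simplified root-distance formula are assumptions, not ntpd's exact
terms):

    /* weight = 1 / root distance; root distance ~ delay/2 + dispersion */
    #include <stdio.h>

    static double weight(double delay, double dispersion)
    {
        return 1.0 / (delay / 2.0 + dispersion);
    }

    int main(void)
    {
        printf("local GPS:   %6.0f\n", weight(0.0001, 0.0002)); /* ~4000 */
        printf("pool server: %6.0f\n", weight(0.0450, 0.0100)); /* ~31  */
        return 0;
    }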

Do NOT look at the "billboard" display. It would have you think NTP picks
just one clock. It rarely does that.

The bottom line is that when setting up NTP you want to give it many clocks
and let it pick. You might even give it two GPS receivers, or GPS and an
Rb-derived PPS, and then five pool servers as backup.


--
Chris Albertson
Redondo Beach, California
Hal Murray
2014-03-24 06:56:47 UTC
Post by Jason Rabel
If there was some sort of feature in NTP (maybe there already is???), or
even a separate program that could "test" a list of NTP servers to try and
pick the lowest latency, I think that could have a positive benefit on
better time transfer.
The current ntp-dev is actually closer than you might expect. The pool
command gets a bunch of servers via DNS. If one of them stops responding, it
will get kicked out and replaced by another server.

It's only a SMOP (small matter of programming) to make it kick out the worst
server every now and then. That should converge to at least N-1 good
servers. (If you have N good ones, it will kick out the worst, which might
get replaced by a poor server. I suppose you could tweak the selection to
require something like more than x% worse than the average or best or ...
Simpler to ask for N+1 if you want N.)
--
These are my opinions. I hate spam.
Hal Murray
2014-07-11 16:33:17 UTC
A while back I even tried an old SveeSix receiver and those work too in the
TS2100.
I have a couple of SveeSixs in case anybody ever needs one.
--
These are my opinions. I hate spam.
Hal Murray
2014-07-27 18:29:56 UTC
I compared against my Trimble modules... Those are ACE-III receivers.
Thanks. The ones I have match the pictures in the data sheets I pulled off
the web.
--
These are my opinions. I hate spam.