In practice, the actual latency, or ping, experienced when transmitting data from one place to another is higher than just the time it takes for light to travel the distance. This is due to a number of additional factors including routing, processing delays at each network device, the quality of the transmission medium, and more.
When data travels through a fiber optic cable, it isn't sent directly from one endpoint to the other: along the way it passes through devices such as switches, routers, and repeaters. Each device takes time to process and forward the data, and these processing delays, added to the propagation time over the physical distance, make up the total latency.
Transatlantic fiber optic cables, such as those connecting London and New York, generally achieve real-world round-trip latencies of around 70-75 milliseconds, or roughly 35 milliseconds one way. This is partly because the actual path length of the fiber is longer than the direct 'as-the-crow-flies' distance, and partly due to the equipment delays described above.
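A quick back-of-the-envelope calculation shows why those numbers make sense. This sketch assumes a silica-fiber refractive index of about 1.47 and a great-circle London-New York distance of roughly 5,570 km (both figures are my assumptions, not from the text above; real cable routes are longer than the great-circle path):

```python
# Estimate the theoretical floor on transatlantic fiber latency.

C_VACUUM_KM_S = 299_792                 # speed of light in vacuum, km/s
REFRACTIVE_INDEX = 1.47                 # typical silica fiber (assumption)
SPEED_IN_FIBER = C_VACUUM_KM_S / REFRACTIVE_INDEX   # ~204,000 km/s

def propagation_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km / SPEED_IN_FIBER * 1000

great_circle_km = 5_570                 # London-New York, approx.
one_way = propagation_ms(great_circle_km)
print(f"theoretical one-way: {one_way:.1f} ms")     # ~27 ms
print(f"theoretical RTT:     {2 * one_way:.1f} ms") # ~55 ms
# Real-world RTT is ~70-75 ms: the cable path is longer than the
# great-circle distance, and each hop adds processing delay.
```

The gap between the ~55 ms theoretical round trip and the ~70-75 ms observed one is exactly the routing and processing overhead discussed above.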
For geostationary satellites, the latency is much higher due to the greater distance the signal must travel. A typical round-trip latency for geostationary satellite communication is around 500-600 milliseconds. This includes the time for the signal to travel up to the satellite, for the satellite to process and retransmit the signal, and for the signal to travel back down to Earth.
Again, these are typical values; actual latency varies with factors such as the specific equipment in use, the quality of the transmission medium, and the level of network congestion.
TL;DR: In reality, link latency depends on the fiber and the route taken. For satellite, ~500 ms is a pretty good average round-trip figure for geostationary orbit.