Regarding the streaming API, the developer web site says: “The streaming API connection will be disconnected under the following circumstances:…A client reads data too slowly. Every streaming connection is backed by a queue of messages to be sent to the client. If this queue grows too large over time, the connection will be closed.”
I’ve implemented code that connects to the Companies House data streaming API and processes each line immediately as it is received. It can consume roughly 1,000 lines per minute. When I used it earlier today after a couple of days off, the connection was being closed roughly every 11–13 minutes, after receiving roughly 11,000–13,000 lines. Now that I have worked through the backlog, I can maintain a steady connection while streaming at about 50–100 lines per minute.
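For reference, the shape of my client is roughly the following. This is a simplified sketch, not my actual code: the stream is stood in for by an in-memory iterator so the snippet is self-contained, and `process_line` is a placeholder for the real work.

```python
import time

def consume(stream, process_line, report_every=1000):
    """Read newline-delimited events, processing each one immediately,
    and log throughput so slow consumption is visible."""
    count = 0
    start = time.monotonic()
    for line in stream:
        process_line(line)  # placeholder for the real per-line work
        count += 1
        if count % report_every == 0:
            elapsed = time.monotonic() - start
            rate = count / elapsed * 60  # lines per minute so far
            print(f"{count} lines consumed, ~{rate:.0f}/min")
    return count

# Stand-in stream and no-op processing, just so the example runs
fake_stream = (f"event {i}" for i in range(5000))
consume(fake_stream, process_line=lambda line: None)
```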
I’m wondering whether the server was closing my connection earlier on the grounds that my client was reading data too slowly. But the math doesn’t work out with the description quoted above.
With a data generation rate of 50–100 lines per minute during the daytime, as cited above, there’s no way the queue backing my connection was growing; it would have been shrinking steadily. So I’m wondering whether the description provided by CH is incomplete: is there a mechanism by which a queue above a certain size won’t be tolerated for longer than a certain time, even if it is being steadily reduced in size? That would be consistent with the behavior I was experiencing.
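To make the arithmetic concrete, here is a toy model of the server-side queue. The rates are the numbers from my post; the queue behaviour itself is my assumption about how the server works, since the docs don’t describe it in detail.

```python
def simulate(backlog, produce_per_min, consume_per_min, minutes):
    """Return the queue size at the end of each minute, given an initial
    backlog, a server-side generation rate, and a client consumption rate."""
    sizes = []
    q = backlog
    for _ in range(minutes):
        q += produce_per_min            # new events appended by the server
        q = max(0, q - consume_per_min) # events drained by the client
        sizes.append(q)
    return sizes

# Catch-up scenario: ~2 days of backlog at ~75 lines/min of generation,
# drained by my client at ~1,000 lines/min.
backlog = 75 * 60 * 48  # roughly two days of events (my estimate)
sizes = simulate(backlog, produce_per_min=75, consume_per_min=1000, minutes=12)

# The queue shrinks by ~925 lines every single minute -- it never grows --
# yet the connection was still being dropped after ~12 minutes.
assert all(later < earlier for earlier, later in zip(sizes, sizes[1:]))
```

If disconnection only depended on the queue *growing*, this scenario should never trigger it, which is why an absolute-size-plus-time cutoff would better explain what I saw.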
Could one of the CH folks shed a little more light on how this mechanism works?