Fix Buffering Issue #8
Not too sure; will need to work this out with Dominique.
Agree. Talk to Dominique and come up with a plan. This is your top priority. Groovy support can wait.
Do it with rosbag. - G
Not really, I need the robot :-D The wifi and the robot are the bottlenecks. Rosbag
Since Vlad is busy getting PCL on board, this is not an option. 2 options: - G
I think what we need to test is the bloating up of the rce-ros client. I'll test it with rosbag.
Exactly! - G
Implementation fix 2e1c21a: these should have always been push producers, which is what we are doing now; it is a producer spewing output, not generating it on request.
Depends on your code design. A pull producer fit the earlier design; a push producer suits your current design.
Although a push producer does seem to fit the case for designing the buffer, it does not shut down on its own. The responsibility for shutting down the transport now lies with the buffer, so we would have to shut transports down manually and deal with the ugly behavior.
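The push/pull distinction being debated here (modeled loosely on Twisted's IPushProducer and IPullProducer) can be illustrated with a minimal, self-contained sketch. All class and method names below are hypothetical stand-ins, not rce or Twisted code; note how the push producer never closes the transport by itself, which is exactly the drawback raised above:

```python
class FakeTransport:
    """Stand-in for a network transport that just records writes."""
    def __init__(self):
        self.written = []
        self.closed = False

    def write(self, data):
        self.written.append(data)

    def loseConnection(self):
        self.closed = True


class PullProducer:
    """Consumer-driven: data is produced only when resumeProducing() is
    called, so the producer can stop itself when it runs dry."""
    def __init__(self, transport, items):
        self.transport = transport
        self.items = list(items)

    def resumeProducing(self):
        if self.items:
            self.transport.write(self.items.pop(0))
        else:
            self.transport.loseConnection()  # shuts down on its own


class PushProducer:
    """Source-driven: data is written as soon as it arrives, buffered
    while the consumer is paused. Nothing here ever closes the
    transport automatically -- the owner (e.g. the buffer) must call
    stop() explicitly."""
    def __init__(self, transport):
        self.transport = transport
        self.paused = False
        self.backlog = []

    def dataAvailable(self, data):
        if self.paused:
            self.backlog.append(data)  # hold frames while consumer is slow
        else:
            self.transport.write(data)

    def pauseProducing(self):
        self.paused = True

    def resumeProducing(self):
        self.paused = False
        while self.backlog:
            self.transport.write(self.backlog.pop(0))

    def stop(self):
        self.transport.loseConnection()


t = FakeTransport()
p = PushProducer(t)
p.dataAvailable(b"frame1")   # written immediately
p.pauseProducing()
p.dataAvailable(b"frame2")   # buffered
p.resumeProducing()          # backlog flushed
p.stop()                     # manual shutdown is required
```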
Hey guys, I pushed a different buffer implementation. Have a look; the branch is 'buffer-alternate'. I was not able to test it yet as I don't have any useful nodes or data to test this with at home. Dominique
Looks good. Seems it was a combination of a flag check and the problem of these things being called from different threads that did me in. Mayank has the nodes and rosbag to test it with; he will update us in a bit.

A few observations/queries/suggestions for how we would implement this. The idea was to buffer binary data alone. We could still pass all messages through the queue (as currently done) and get the desired result using priorities (perhaps this is why it didn't work in all cases when I was using two channels). We queue all messages and prune the queue on a set interval, which seems fair. We could use priority 0 for rapyuta control messages (as currently done; never pruned), 1 for JSON data messages (pruning optional, since this depends on the experimental setup; we need to take a call on this), and 2 and higher for other kinds of data messages, which are pruned and sent according to priority.

Edit: if messages that are not supposed to be pruned are still found in the queue, it probably means something is wrong with the connection. It would be better to explicitly inform the user or log this as a connection error, and possibly take some action, rather than drop them or log them at the standard level. Insights welcome. Regards.
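The priority scheme proposed above could look roughly like this. This is a minimal stdlib sketch under my own assumptions; the class, constant names, and the prune semantics are illustrative, not taken from the rce codebase:

```python
import heapq
import itertools

# Priority levels from the proposal: 0 = rapyuta control (never pruned),
# 1 = JSON data (pruning optional), 2+ = other data (pruned on interval).
CONTROL, JSON_DATA, BINARY_DATA = 0, 1, 2


class PriorityMessageQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO per priority

    def put(self, priority, msg):
        heapq.heappush(self._heap, (priority, next(self._counter), msg))

    def get(self):
        """Pop the highest-priority (lowest number) message."""
        return heapq.heappop(self._heap)[2]

    def prune(self, max_priority=BINARY_DATA):
        """Drop prunable messages (priority >= max_priority). If a control
        message is still queued at prune time, report it as a likely
        connection problem instead of dropping it, per the Edit above."""
        stale_control = [m for p, _, m in self._heap if p == CONTROL]
        self._heap = [e for e in self._heap if e[0] < max_priority]
        heapq.heapify(self._heap)
        return stale_control  # caller should log these as connection errors


q = PriorityMessageQueue()
q.put(BINARY_DATA, b"image-frame-1")
q.put(CONTROL, "shutdown")
q.put(JSON_DATA, '{"pose": [0, 0]}')
stale = q.prune()  # binary frame dropped; queued control message reported
```

The key design point is that pruning never touches control messages; it only removes data above the threshold and surfaces any control messages that should already have been sent.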
Hey guys, I tested both master and buffer-alternate with rosbag. Here's what I did:
The situation was the same for both the master and buffer-alternate branches. I can't figure out why rce-ros doesn't bloat up when it can't transmit all of the sub-300 frames on the master branch, which has no buffer implementation. Given that, I don't think it's possible to properly test this with rosbag.
@dhananjaysathe As discussed, please talk to the Autobahn folks about the issue and try to get some feedback.
Will do.
Add a fixed width buffer on the client and the server side.
Sync the header with the binary blob? (@Sathe: Not sure if we were planning this for this week. Please update.)
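The fixed-width buffer suggested above could be sketched with a bounded deque that evicts the oldest frame when full, keeping memory constant even when the wifi/robot link is the bottleneck. A minimal sketch; the names and the drop-oldest policy are my assumptions, not the agreed design:

```python
from collections import deque


class FixedWidthBuffer:
    """Bounded buffer: once `width` frames are queued, the oldest frame
    is silently discarded (deque with maxlen does this automatically),
    so a slow link can never bloat the client. Illustrative sketch only."""
    def __init__(self, width):
        self._buf = deque(maxlen=width)
        self.dropped = 0  # count evictions for logging/diagnostics

    def push(self, frame):
        if len(self._buf) == self._buf.maxlen:
            self.dropped += 1  # the append below evicts the oldest entry
        self._buf.append(frame)

    def pop(self):
        return self._buf.popleft() if self._buf else None


buf = FixedWidthBuffer(width=3)
for i in range(5):
    buf.push(i)
# frames 0 and 1 were evicted; 2, 3, 4 remain
```

The same structure would sit on both the client and the server side; only the width (and possibly the eviction policy) would need tuning per side.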