Fix struct timeval units in DCE poll #94
```diff
@@ -36,7 +36,7 @@ int dce_poll (struct pollfd *fds, nfds_t nfds, int timeout)
     }
   else if (timeout > 0)
     {
-      endtime = Now () + MilliSeconds (timeout);
+      endtime = Now () + MicroSeconds (timeout);
     }

   for (uint32_t i = 0; i < nfds; ++i)
```
```diff
@@ -208,7 +208,7 @@ int dce_select (int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds,
   if (timeout)
     {
-      pollTo = timeout->tv_sec * 1000 + timeout->tv_usec / 1000;
+      pollTo = timeout->tv_sec * 1000000 + timeout->tv_usec / 1000;
```
Member: I believe we should make the minimum resolution in UtilsAdvanceTime a global one.

Author (Collaborator): I am now just reading the thread carefully for the first time. I had some off-list discussion with Trish, and she does not have a small test case to contribute. It seems there are two issues: first, dce_select() calling dce_poll() with a non-zero timeout of less than 1 ms; and second, an application calling poll() with a timeout of zero, which needs a forced time step to prevent infinite looping. For the second issue, if I understand correctly, Matt is proposing to pull this magic value out into a global so that users can tune it more easily, and also to round it up to one time step if the configured value is smaller than the simulation resolution (e.g. if the simulation time step is 1 ns and the user configures the magic value in picoseconds). For the first issue, the problem is that dce_select() calls dce_poll() but wants sub-millisecond resolution. Here I suggest creating a dce_poll_internal() method that replaces `int timeout` with `ns3::Time timeout`, having dce_select() call this variant directly, and having the existing dce_poll() forward to dce_poll_internal().
```diff
     }

   int pollRet = dce_poll (pollFd, nfds, pollTo);
```
The poll(2) manpage does say the timeout should be in milliseconds, so I believe this change is incorrect.
Good catch; I will ask Trish to weigh in about this. It would be nice to have some test code.