-
|
Hello, this may be fairly trivial and hopefully possible. I'm trying to listen for when an rsync event occurs while a task is running, have a conditional catch it, and in my case have an LED turn on every time a pulse goes out. The issue is that when I do have a conditional listening for rsync events (which are successfully going out and being logged), nothing happens. I'm assuming I'm referencing it wrong. Below is a short example. |
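A minimal sketch of that kind of task, written here as an illustration rather than the poster's original code, assuming a hardware definition that exposes a Digital_output named LED; with the standard framework the 'rsync' branch never fires, which is the behaviour being reported:

# Task sketch (hypothetical).
from pyControl.utility import *
import hardware_definition as hw  # Assumed to define a Digital_output called LED and an Rsync.

states = ['waiting']
events = ['rsync']        # 'rsync' must be listed here for the pulses to be generated and logged.
initial_state = 'waiting'

def waiting(event):
    if event == 'rsync':  # Never reached: rsync events bypass the state machine (see reply below).
        hw.LED.on()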
-
|
The rsync event does not get processed by the state machine; it goes straight to the data log. The line of interest is code/source/pyControl/hardware.py, line 509 (at commit e6fe329). I believe this is intentional, so that the sync signal is as low-overhead as possible? @ThomasAkam, should we add an option to the Rsync initialisation for sending the event to the event queue? Something like...

class Rsync(IO_object):
    # Class for generating sync pulses with random inter-pulse interval.

    def __init__(self, pin, event_name="rsync", mean_IPI=5000, pulse_dur=50, process_event=False):
        assert 0.1 * mean_IPI > pulse_dur, "0.1*mean_IPI must be greater than pulse_dur"
        self.sync_pin = pyb.Pin(pin, pyb.Pin.OUT)
        self.event_name = event_name
        self.pulse_dur = pulse_dur  # Sync pulse duration (ms).
        self.process_event = process_event  # If True, send sync events to the state machine's event queue.
        self.min_IPI = int(0.1 * mean_IPI)
        self.max_IPI = int(1.9 * mean_IPI)
        assign_ID(self)

    def _initialise(self):
        self.event_ID = sm.events[self.event_name] if self.event_name in sm.events else False

    def _run_start(self):
        if self.event_ID:
            self.state = False  # Whether output is high or low.
            self._timer_callback()

    def _run_stop(self):
        self.sync_pin.value(False)

    def _timer_callback(self):
        if self.state:  # Pin high -> low, set timer for next pulse.
            timer.set(randint(self.min_IPI, self.max_IPI), fw.HARDW_TYP, "", self.ID)
        else:  # Pin low -> high, set timer for pulse duration and log the sync event.
            timer.set(self.pulse_dur, fw.HARDW_TYP, "", self.ID)
            if self.process_event:
                # Proposed option: put the event on the event queue so the state machine processes it.
                fw.event_queue.put(fw.Datatuple(fw.current_time, fw.EVENT_TYP, "s", self.event_ID))
            else:
                # Current behaviour: write the event straight to the data log.
                fw.data_output_queue.put(fw.Datatuple(fw.current_time, fw.EVENT_TYP, "s", self.event_ID))
        self.state = not self.state
        self.sync_pin.value(self.state)
|
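As an illustration, if the process_event option above were adopted, a hardware definition and task could use it roughly as follows. This is a hypothetical sketch: the Breakout_1_2 board, its BNC_1 connector, the LED output and the process_event parameter itself are all assumptions, not part of the current release.

# hardware_definition.py (sketch).
from devices import *

board = Breakout_1_2()
sync_output = Rsync(pin=board.BNC_1, mean_IPI=5000, process_event=True)  # Proposed option.
LED = Digital_output(board.port_1.POW_A)

# In the task file, a state behaviour function could then respond to the sync pulses:
def waiting(event):
    if event == 'rsync':
        hw.LED.on()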
-
|
My feeling is that because the intended function of Rsync pulses is the synchronisation of multiple hardware systems, and conceptually the task state machine is independent of this, I am reluctant to allow sync pulses to be processed by the task state machine. It seems that the issue here, and also in another discussion post here, is that sometimes it is useful to output sync pulses on more than one pyboard pin. I implemented a version of the Rsync class that can do that, linked from the other discussion, but perhaps we should make this possible in the default Rsync class. The only reason not to do this is that it might slightly increase the overhead associated with generating a pulse, due to iterating over a list of pins. @alustig3, what do you think? |
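For reference, a sketch of how the default Rsync class could accept either a single pin or a list of pins; this is a hypothetical version written to illustrate the idea, not the implementation linked from the other discussion, and the extra cost per pulse is just the loop over sync_pins at the bottom:

class Rsync(IO_object):
    # Sketch: generate sync pulses with random inter-pulse interval on one or more pins.

    def __init__(self, pin, event_name="rsync", mean_IPI=5000, pulse_dur=50):
        assert 0.1 * mean_IPI > pulse_dur, "0.1*mean_IPI must be greater than pulse_dur"
        pins = pin if isinstance(pin, list) else [pin]  # Accept a single pin or a list of pins.
        self.sync_pins = [pyb.Pin(p, pyb.Pin.OUT) for p in pins]
        self.event_name = event_name
        self.pulse_dur = pulse_dur  # Sync pulse duration (ms).
        self.min_IPI = int(0.1 * mean_IPI)
        self.max_IPI = int(1.9 * mean_IPI)
        assign_ID(self)

    def _initialise(self):
        self.event_ID = sm.events[self.event_name] if self.event_name in sm.events else False

    def _run_start(self):
        if self.event_ID:
            self.state = False  # Whether outputs are high or low.
            self._timer_callback()

    def _run_stop(self):
        for sync_pin in self.sync_pins:
            sync_pin.value(False)

    def _timer_callback(self):
        if self.state:  # Pins high -> low, set timer for next pulse.
            timer.set(randint(self.min_IPI, self.max_IPI), fw.HARDW_TYP, "", self.ID)
        else:  # Pins low -> high, set timer for pulse duration and log the sync event.
            timer.set(self.pulse_dur, fw.HARDW_TYP, "", self.ID)
            fw.data_output_queue.put(fw.Datatuple(fw.current_time, fw.EVENT_TYP, "s", self.event_ID))
        self.state = not self.state
        for sync_pin in self.sync_pins:  # Setting each pin in turn is the only extra per-pulse work.
            sync_pin.value(self.state)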
-
|
I did some testing and it looks like allowing the standard Rsync class to support multiple pins will add minimal overhead (~8 µs per pulse if only a single pin is used), so I have implemented this in the dev branch and it will be included in the next release. |
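Assuming the new version accepts a list for the pin argument (an assumption about the eventual interface, along with the Breakout_1_2 board and its BNC connectors), outputting the same pulses on two pins from a hardware definition might then look like:

# hardware_definition.py (sketch).
from devices import *

board = Breakout_1_2()
sync_output = Rsync(pin=[board.BNC_1, board.BNC_2], mean_IPI=5000)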